Kubernetes vs. Docker Swarm: A Reality Check for Nordic Infrastructure Teams
Let’s be honest: 90% of the companies deploying Kubernetes today don't actually need it. There, I said it. In the rush to adopt "cloud-native" architectures, I've watched brilliant engineering teams in Oslo and Bergen burn weeks of engineering time configuring Ingress controllers and debugging CNI plugins for workloads that could have run on a simple Docker Swarm cluster, or even a well-scripted systemd setup.
But when you do need it, nothing else suffices. The trick isn't knowing how to deploy Kubernetes; it's knowing when to avoid it. Today, we are stripping away the marketing fluff. We will look at container orchestration from the perspective of a SysAdmin who gets paged at 3 AM. We’ll cover the actual performance overhead, the storage requirements that most VPS providers hide, and the specific legal implications of running these clusters on Norwegian soil post-Schrems II.
The Latency Trap: Why Your Cluster Feels Slow
I recall a migration project for a FinTech startup based near Barcode in Oslo. They moved from a monolith to microservices on a generic European cloud provider. Immediately, their API response times spiked. They blamed the code. They blamed the database.
The culprit was etcd. Kubernetes relies heavily on etcd for state management, and etcd is notoriously sensitive to disk write latency. If your underlying infrastructure steals IOPS or suffers from "noisy neighbor" syndrome, which is common in budget VPS hosting, etcd becomes a bottleneck before a single packet reaches your application.
To verify if your current nodes can handle a production K8s control plane, you shouldn't guess. You should benchmark fsync latency. Here is how we verify node readiness on CoolVDS infrastructure before approving a cluster deployment:
# Install fio to test disk latency
apt-get install -y fio
# Run a test simulating etcd write patterns
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=100m \
    --bs=2300 --name=etcd_benchmark
If the 99th percentile latency (fdatasync) exceeds 10ms, your cluster will be unstable. On CoolVDS NVMe instances, we typically see this under 2ms. This isn't just a spec sheet number; it makes the difference between a self-healing cluster and a split-brain disaster.
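Once fio has reported the fdatasync percentiles, the approval decision can be scripted. The helper below is a sketch of such a gate; `check_fsync_p99` is our own name, not a standard tool, and you would feed it the p99 value fio printed, converted to microseconds:

```shell
#!/bin/sh
# Node-approval gate: reject a node whose p99 fdatasync latency
# exceeds etcd's documented guidance of 10ms.
ETCD_P99_BUDGET_US=10000   # 10ms expressed in microseconds

check_fsync_p99() {
  p99_us="$1"
  if [ "$p99_us" -gt "$ETCD_P99_BUDGET_US" ]; then
    echo "REJECT: p99 fdatasync ${p99_us}us exceeds ${ETCD_P99_BUDGET_US}us budget"
    return 1
  fi
  echo "APPROVE: p99 fdatasync ${p99_us}us within budget"
}
```

A healthy NVMe node passes something like `check_fsync_p99 1800`, while a node reporting 25ms (`check_fsync_p99 25000`) is rejected before it can destabilize the control plane.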
Docker Swarm: The Pragmatic Choice
For teams managing fewer than 50 microservices, Docker Swarm is arguably superior due to its low cognitive load. You don't need a dedicated DevOps engineer just to manage the control plane. Swarm is baked into the Docker engine.
Consider a standard deployment. In Kubernetes, you need Deployments, Services, Ingress, and ConfigMaps. In Swarm, you just need a standard docker-compose.yml file:
version: '3.8'

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:
To deploy this on a CoolVDS cluster, the command is delightfully boring:
docker stack deploy -c docker-compose.yml production_stack
The simplicity allows you to focus on application logic rather than YAML indentation. However, Swarm lacks the rich ecosystem of Helm charts and operators. If you need complex stateful sets or automated database sharding, Swarm's limitations become painful quickly.
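One caveat with `restart_policy: condition: on-failure`: Swarm only restarts what it can see failing, and a container can hang while its process stays alive. In practice you pair the restart policy with a healthcheck in the same compose file. A minimal sketch for the nginx service above (busybox `wget` ships in the alpine image):

```yaml
services:
  web:
    image: nginx:alpine
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

With this in place, Swarm treats a hung-but-running container as failed and replaces it, which is what the restart policy was meant to cover.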
Kubernetes: The Heavy Artillery
If you are subject to strict compliance rules (GDPR) and need granular control over network policies, Kubernetes (K8s) is the standard. Specifically, in Norway, isolating workloads to ensure data never leaves the region is critical. K8s NetworkPolicies allow us to lock down pod-to-pod communication.
Here is a restrictive NetworkPolicy you should apply by default to prevent lateral movement if a container is compromised:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
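Be aware that default-deny also blocks DNS, so pods in the namespace can no longer resolve service names. The usual companion is an explicit egress allowance to the cluster DNS. A sketch; the `k8s-app: kube-dns` label matches stock CoreDNS deployments, but verify the labels in your own cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

From there, add narrow allow-rules per application instead of ever loosening the default deny.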
When running K8s, the underlying kernel parameters matter. Most default Linux distributions are tuned for desktop or generic server usage, not high-density container packet switching. We recommend tuning sysctl.conf on your worker nodes:
# Increase connection tracking for high pod density
net.netfilter.nf_conntrack_max = 1048576
# Allow more memory for TCP buffers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Enable IP forwarding (essential for CNI plugins)
net.ipv4.ip_forward = 1
# Optimize swappiness for K8s (it hates swap)
vm.swappiness = 0
Pro Tip: Never disable swap entirely without understanding the OOM (Out of Memory) Killer behavior. Instead, set swappiness to 0 or 1. Kubernetes schedulers expect to manage memory resources strictly; swapping confuses the metrics server and can lead to cascading pod failures. CoolVDS images come with these kernel optimizations pre-validated.
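After dropping those values into /etc/sysctl.d/ and running `sysctl --system`, it pays to verify they actually took effect; the conntrack key, for instance, only appears once the nf_conntrack module is loaded. A small sketch, where `check_sysctl` is our own helper name rather than a standard tool:

```shell
#!/bin/sh
# Compare a kernel tunable against its intended value by reading /proc/sys
# directly, so this works even on minimal images without sysctl(8).
check_sysctl() {
  key="$1"; want="$2"
  path="/proc/sys/$(echo "$key" | tr . /)"
  if [ ! -r "$path" ]; then
    echo "SKIP: $key (not present; module not loaded?)"
    return 0
  fi
  have=$(cat "$path")
  if [ "$have" = "$want" ]; then
    echo "OK: $key = $have"
  else
    echo "DRIFT: $key is $have, want $want"
    return 1
  fi
}

# Example usage on a worker node:
#   check_sysctl net.ipv4.ip_forward 1
#   check_sysctl vm.swappiness 0
```

Run it from your configuration-management tool after every kernel or image upgrade, since upgrades are exactly when these values tend to silently revert.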
The Storage Class Dilemma
Stateful workloads in K8s (like Postgres or Redis) require PersistentVolumes (PV). In a cloud environment, these usually hook into network storage (NFS, Ceph, etc.). However, network storage adds latency. For maximum performance, especially for databases, you want local path provisioning on fast NVMe drives.
Here is how you define a high-performance local storage class. This assumes your VPS has dedicated NVMe storage mounted:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Using WaitForFirstConsumer is crucial. It forces the scheduler to place the Pod on the specific node where the NVMe volume exists. Many generic hosts virtualize storage over slow networks, crushing your IOPS. This is why we emphasize bare-metal performance characteristics in our KVM virtualization layer—databases need raw I/O access.
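Note that kubernetes.io/no-provisioner means exactly that: nothing is provisioned for you. You pre-create one PersistentVolume per disk, pinned to its node via nodeAffinity, and the StorageClass plus WaitForFirstConsumer handle the matching. A sketch, assuming the NVMe volume is mounted at /mnt/nvme0 on a node named worker-1 (both are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-pv-worker-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme
  local:
    path: /mnt/nvme0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
```

A PVC requesting the local-nvme class then binds to this PV only once a Pod lands on worker-1, which is precisely the behavior WaitForFirstConsumer guarantees.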
Comparison: Choosing Your Weapon
| Feature | Docker Swarm | Kubernetes (K8s) | CoolVDS Recommendation |
|---|---|---|---|
| Learning Curve | Low (Days) | High (Months) | Start with Swarm if team < 5 |
| Scalability | ~1,000 Nodes | ~5,000+ Nodes | K8s for Enterprise |
| Load Balancing | Built-in Mesh | Ingress / Service Mesh | Use external Load Balancer for HA |
| Storage | Volume Plugins | CSI / PVC / PV | Must use NVMe |
The Norway Compliance Angle (GDPR & NIX)
Data residency is no longer optional. The Norwegian Datatilsynet is increasingly strict about transfers to third countries. Hosting your orchestration layer on US-owned hyperscalers introduces legal ambiguity regarding the CLOUD Act.
By utilizing local Norwegian infrastructure like CoolVDS, you ensure the physical bits—and the orchestration metadata—remain within national borders. Furthermore, peering at NIX (Norwegian Internet Exchange) ensures that traffic between your users in Oslo and your servers doesn't route through Frankfurt or Stockholm, keeping latency minimal.
Final Verdict
If you are building a massive microservices architecture with 50+ engineers, take the time to build a Kubernetes cluster properly. Invest in the automation, the monitoring stack (Prometheus/Grafana), and the security policies.
But if you just need to run a reliable, redundant web application, don't over-engineer. Docker Swarm on high-performance Linux VPS instances is a battle-tested strategy that is easier to maintain and cheaper to run.
Whichever orchestrator you choose, remember that software cannot fix slow hardware. CPU steal time and I/O wait will kill your cluster's performance regardless of how elegant your YAML is. Ensure your foundation is solid.
Need a low-latency environment to test your cluster? Deploy a high-frequency NVMe instance on CoolVDS today and see the difference dedicated resources make.