Container Orchestration in 2020: Why Kubernetes Isn't Always the Answer for Nordic Deployments

Let’s be honest for a second: 90% of the companies currently migrating to Kubernetes (K8s) don't actually have Google-scale problems. They have resume-driven development problems.

I’ve spent the last six months cleaning up "cloud-native" disasters across Oslo and Stockholm. I see the same pattern everywhere: a small dev team managing a three-node cluster that consumes more engineering hours in maintenance than the application it hosts. With the Schrems II ruling from the CJEU landing in July and invalidating the Privacy Shield, the game has changed. You can no longer blindly dump customer data into a US-managed control plane and hope for the best. In Norway, data sovereignty is now an architectural requirement, not just a line item for the lawyers.

Today, we are stripping away the marketing fluff. We will look at the three viable contenders for your infrastructure in late 2020: the juggernaut Kubernetes, the "dead-but-not-dead" Docker Swarm, and the pragmatic HashiCorp Nomad. And we’re going to talk about the one thing software architects forget: the metal underneath.

The Contenders: A Technical Reality Check

1. Kubernetes (The 800lb Gorilla)

With the release of version 1.19 last month, Kubernetes has officially stabilized Ingress. That's great. But the operational overhead remains massive. K8s is not a deployment tool; it is a framework for building deployment platforms.

If you are running K8s, you aren't just managing containers. You are managing a distributed database (etcd), a software-defined network (CNI), and a complex reconciliation loop. If your etcd latency spikes, your cluster falls apart. This is where most generic VPS providers fail—they oversell storage I/O, and when a neighbor spins up a backup, your API server starts timing out.
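You don't have to guess whether etcd is suffering; it tells you. As a sketch, assuming a kubeadm-style control plane that exposes etcd metrics on `127.0.0.1:2381` (the port and layout vary by setup), you can pull the WAL fsync histogram directly:

```shell
# Sketch: inspect etcd WAL fsync latency on a control-plane node.
# Assumes kubeadm-style etcd metrics on 127.0.0.1:2381 -- adjust for your cluster.
curl -s http://127.0.0.1:2381/metrics \
  | grep etcd_disk_wal_fsync_duration_seconds
```

Rule of thumb: the 99th percentile of WAL fsync should stay in the single-digit milliseconds. If it creeps higher, your API server will feel sluggish long before the cluster actually falls over.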

Pro Tip: Never run a production K8s cluster without setting resource limits. The `OOMKiller` (Out of Memory Killer) is ruthless. Always define your `requests` and `limits` in the deployment YAML.
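As an illustrative sketch, the relevant fragment of a container spec looks like this (the numbers are placeholders; tune them to your workload):

```yaml
# Illustrative fragment of a Deployment's container spec.
containers:
- name: app
  image: nginx:1.19
  resources:
    requests:
      memory: "128Mi"   # the scheduler reserves this much for the pod
      cpu: "250m"
    limits:
      memory: "256Mi"   # exceed this and the OOMKiller steps in
      cpu: "500m"
```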

2. Docker Swarm (The "Just Works" Option)

Since Mirantis acquired Docker Enterprise last year, people have been writing Swarm's obituary. Ignore them. For a team of five developers serving the Norwegian market, Swarm is superior to K8s. Why? Because the time-to-hello-world is five minutes, not five days.

Swarm lacks the rich ecosystem of Helm charts and Operators, but it is built into the Docker engine you already have. There is no extra binary to install, no `etcd` cluster to baby (it's embedded).

3. HashiCorp Nomad (The Unix Philosophy)

Nomad is the sniper rifle to Kubernetes' shotgun. It does one thing: scheduling. It doesn't care about networking (use Consul) or secrets (use Vault). It can schedule Docker containers, but also raw binaries, Java JARs, and QEMU virtual machines. If you have legacy monoliths that can't be containerized yet, Nomad is your bridge.
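To show how small the surface area is, here is a minimal illustrative Nomad job (names and values are placeholders; networking and ports are deliberately omitted):

```hcl
# Minimal sketch of a Nomad job file, e.g. web.nomad
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    count = 3

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.19"
      }

      resources {
        cpu    = 250 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

Submit it with `nomad job run web.nomad`. That single binary plus this file is the whole control plane.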

The Invisible Bottleneck: Storage Latency

Here is the hard truth: Your orchestrator cannot fix slow I/O.

I recently debugged a Magento cluster running on K8s where the pods kept crashing. The dev team blamed the scheduler. The logs showed nothing useful. We ran a trace and found that the MySQL container was locking up because the underlying storage couldn't handle the IOPS during re-indexing.

In a containerized environment, latency kills. Whether it's `etcd` syncing state or a database flushing to disk, you need fast storage. This is why we built CoolVDS exclusively on NVMe arrays. We don't use spinning rust, and we isolate I/O paths so your neighbors can't steal your throughput.

Here is how you check if your current host is lying to you about "SSD" speeds. Run this on your server:

# Install fio if you haven't (Debian/Ubuntu)
apt-get update && apt-get install -y fio

# Run a random write test mimicking a busy database
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=1 --time_based --runtime=60 --group_reporting

On a standard SATA SSD VPS, you might see 3,000 IOPS. On a CoolVDS NVMe instance, you should be seeing significantly higher numbers, often saturating the interface limits. If you are running a K8s control plane, you need that low latency for etcd stability.
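IOPS is only half the story; etcd and databases care about synchronous write latency. As a complementary check (ioping ships in the Debian/Ubuntu repos), you can measure it directly:

```shell
# Install ioping (Debian/Ubuntu)
apt-get install -y ioping

# Measure synchronous write latency in the current directory
ioping -c 10 -W .
```

On a healthy NVMe volume the per-request latency is typically well under a millisecond; if you see multi-millisecond spikes at idle, your "SSD" host is overselling.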

Configuration Deep Dive

Let's look at the complexity difference. Here is the code required to deploy a simple Nginx service with a persistent volume.

The Kubernetes Way (Verbose)

You need a PersistentVolumeClaim, a Deployment, and a Service. This is just the Deployment part:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-storage
      volumes:
      - name: nginx-storage
        persistentVolumeClaim:
          claimName: nginx-pvc
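For completeness, the PersistentVolumeClaim it references might look like this (the size is illustrative, and many clusters also require a provider-specific `storageClassName`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```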

The Docker Swarm Way (Simple)

Everything fits in one familiar `docker-compose.yml` file:

version: '3.8'
services:
  web:
    image: nginx:1.19
    deploy:
      replicas: 3
    volumes:
      - web-data:/usr/share/nginx/html
    ports:
      - "80:80"

volumes:
  web-data:

To deploy this on Swarm, you run a single command:

docker stack deploy -c docker-compose.yml mywebapp
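That assumes the node is already a Swarm manager. On a fresh install, initialise once, then verify the stack converged (the `mywebapp_web` service name follows Swarm's `<stack>_<service>` convention):

```shell
# One-time setup: make this engine a Swarm manager (single-node is fine)
docker swarm init

# After deploying, confirm the three replicas are running
docker stack services mywebapp
docker service ps mywebapp_web
```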

The Legal & Infrastructure Reality (Schrems II)

Technical architecture does not exist in a vacuum. With the invalidation of the Privacy Shield in July 2020, relying on US-based cloud providers (AWS, Azure, GCP) for storing European user data has become a compliance nightmare. The Norwegian Datatilsynet is watching closely.

If you host on a CoolVDS KVM instance in our Oslo datacenter, your data stays in Norway. You aren't subject to the US CLOUD Act in the same direct way as using a US-owned managed service. For many of my clients in finance and healthcare, this is the deciding factor. They use open-source tools (K8s/Swarm) on top of sovereign Norwegian iron.

Latency Matters: The Oslo Connection

If your user base is in Norway, why route traffic through Frankfurt? Speed is a feature. The round-trip time (RTT) from a fiber connection in Oslo to a datacenter in Frankfurt is ~25-30ms. To a local Oslo datacenter (like ours connected to NIX), it is <2ms.

# Check your latency to the Norwegian Internet Exchange
ping -c 5 oslo.ix.no

Comparison: Choosing Your Path

Feature              | Kubernetes                 | Docker Swarm       | Nomad
---------------------|----------------------------|--------------------|----------------
Learning Curve       | High (steep)               | Low (easy)         | Medium
Maintenance          | Heavy (dedicated ops)      | Low                | Low/Medium
Storage Sensitivity  | High (etcd needs NVMe)     | Medium             | Low
Best Use Case        | Complex microservices      | Small/medium teams | Mixed workloads

Conclusion

If you are a team of three developers building a CRUD app for a Norwegian client, do not install Kubernetes. You will spend more time debugging YAML indentation than writing code. Use Docker Swarm or a simple Nomad setup on a robust VPS.

However, if you must use Kubernetes for its ecosystem, ensure your foundation is solid. K8s on cheap, oversold hardware is a suicide mission. You need dedicated CPU cycles and NVMe storage that doesn't choke the moment you run `kubectl apply`.

Stop guessing about performance. Spin up a CoolVDS KVM instance in Oslo today. With our NVMe storage and 10Gbps uplinks, your containers will run the way they were designed to: instantly.