The Container Hype Train Has No Brakes (And No Drivers)
It is 2 AM. You are staring at a terminal window. The developer swore that the app "works on my machine," but now that it's deployed across three nodes, service discovery is broken and latency is spiking. Welcome to the reality of Docker in May 2015.
Docker 1.6 was released last month. It is brilliant. It solves the dependency hell that has plagued Linux administration for decades. But while docker run is easy, managing five hundred containers across a cluster is a nightmare that most teams aren't ready for. We are moving from "pet" servers to "cattle," but right now, it feels more like herding cats.
If you are running a serious stack in Norway—whether it's for an oil and gas dashboard or a high-traffic media site—you need an orchestrator. You cannot rely on manual shell scripts anymore. Let's look at the three contenders fighting for the crown this year: the Google prodigy (Kubernetes), the native son (Swarm), and the heavy lifter (Mesos).
The Contenders
1. Kubernetes (The Google Way)
Google has been running containers for a decade using Borg. Now they are giving us a slice of that power with Kubernetes (K8s). It is currently in late beta (v0.18 as of writing), aiming for a 1.0 release this summer.
The Good: It introduces concepts that actually make sense for reliability: Pods (groups of containers), Replication Controllers, and Services. It doesn't just run containers; it keeps them alive. If a node dies, K8s reschedules the pod elsewhere.
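To make that concrete, here is a minimal sketch of a Replication Controller manifest in the v1beta3 API that current v0.18 builds speak (the name and image below are placeholders):

    apiVersion: v1beta3
    kind: ReplicationController
    metadata:
      name: web
    spec:
      replicas: 3            # K8s keeps exactly three copies of this pod running
      selector:
        app: web             # pods carrying this label belong to the controller
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx     # placeholder: swap in your own image
            ports:
            - containerPort: 80

Create it with kubectl create -f web-rc.yaml, kill one of the pods, and watch the controller schedule a replacement. That self-healing loop is the whole point.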
The Bad: It is complex. The learning curve is a cliff. Setting up etcd, the API server, the scheduler, and the kubelet manually is prone to error. Networking is also tricky—you will likely need an overlay network like Flannel or Weave to get containers talking across hosts.
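To give you a sense of the moving parts: Flannel, for instance, reads its overlay configuration from etcd, so you have to seed etcd before starting the daemon on each host (the subnet below is just an example):

    # Reserve an address range for the overlay (run once against your etcd cluster)
    etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
    # Then start flanneld on every host; each node carves its own /24 out of that range

Forget this step and your pods simply won't talk across hosts. That is the flavor of footgun you sign up for today.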
2. Docker Swarm (The Native Way)
Docker announced Swarm at DockerCon Europe last December as their native clustering tool, and version 0.2 shipped alongside Docker 1.6. It turns a pool of Docker hosts into a single, virtual Docker host.
The Good: Simplicity. If you know the Docker API, you know Swarm. You can use the standard Docker CLI commands you already know.
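A minimal sketch of a cluster using Docker's hosted token discovery (all addresses are placeholders, and each node's Docker daemon must already be listening on TCP):

    # Generate a cluster token (one time, from any machine)
    docker run --rm swarm create
    # -> prints a token, e.g. 6856663cdefc...

    # On every node: register the local Docker daemon with the cluster
    docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_token>

    # On one machine: start the Swarm manager
    docker run -d -p 3375:2375 swarm manage token://<cluster_token>

    # Your existing Docker CLI now drives the whole cluster
    docker -H tcp://<manager_ip>:3375 run -d nginx
    docker -H tcp://<manager_ip>:3375 ps

That really is the entire learning curve.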
The Bad: It is very immature. Placement strategies like binpack cover simple scheduling, but there is nothing comparable to Kubernetes' Replication Controllers: if a container dies, nothing reschedules it for you. Treating the cluster like one big machine is fine for basic scaling, but it struggles with complex service dependencies and healing.
3. Apache Mesos + Marathon (The Enterprise Way)
Mesos has been around longer, dating back to 2009 at UC Berkeley, and it became a top-level Apache project in 2013. It abstracts CPU, memory, storage, and other compute resources away from machines, enabling fault-tolerant and elastic distributed systems. For long-running services, you pair it with Marathon, Mesosphere's scheduler framework.
The Good: Proven scale. Twitter and Airbnb use this. It can handle thousands of nodes. It's battle-tested.
The Bad: Overkill for most. Unless you are running hundreds of nodes, the operational overhead of a ZooKeeper quorum plus redundant Mesos masters is heavy.
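If you do go this route, the day-to-day interface is Marathon's REST API: you POST an app definition as JSON and Marathon keeps the requested number of instances alive. A rough sketch against a Marathon 0.8-era API (host, image, and values are placeholders):

    curl -X POST http://<marathon_host>:8080/v2/apps \
      -H "Content-Type: application/json" \
      -d '{
            "id": "web",
            "instances": 3,
            "cpus": 0.25,
            "mem": 256,
            "container": {
              "type": "DOCKER",
              "docker": {
                "image": "nginx",
                "network": "BRIDGE",
                "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
              }
            }
          }'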
The Infrastructure Reality Check: KVM vs. OpenVZ
Here is the technical snag that catches everyone off guard: Docker relies heavily on Linux kernel features (cgroups and namespaces).
Many budget VPS providers in Europe are still selling OpenVZ containers. Do not try to run Docker on OpenVZ. Most OpenVZ hosts run ancient 2.6.32 kernels (RHEL6 era). Docker requires a modern 3.10+ kernel to function correctly without ugly hacks.
Pro Tip: Check your kernel version immediately. Run uname -r. If you see anything less than 3.10, stop. You need a KVM-based VPS.
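A quick sanity check you can paste into any shell (the /proc/user_beancounters file only exists inside OpenVZ containers):

    # Kernel version: Docker needs 3.10 or newer
    uname -r

    # OpenVZ tell-tale: this file exists only inside OpenVZ containers
    test -f /proc/user_beancounters && echo "OpenVZ detected: move to KVM before trying Docker"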
This is why we built CoolVDS on a pure KVM architecture. When you spin up a CoolVDS instance, you get your own dedicated kernel. You can install Ubuntu 14.04 LTS or CentOS 7, install the latest Docker engine, and it just works. No weird "device mapper" errors. No shared kernel panics.
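On a fresh Ubuntu 14.04 instance, the whole bootstrap is a couple of commands (this pipes Docker's official install script to a shell, so read it first if you are cautious):

    # Install the latest Docker engine from Docker's official repository
    curl -sSL https://get.docker.com/ | sh

    # Sanity check: kernel and storage driver
    docker info | grep -E 'Kernel Version|Storage Driver'

    # First container
    docker run --rm hello-world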
Data Privacy: The Norwegian Context
We are seeing increasing scrutiny from Datatilsynet regarding where data physically lives. The US Safe Harbor framework technically still exists, but the winds are shifting: the Schrems challenge to Safe Harbor is sitting before the European Court of Justice as we speak. Relying on US-based cloud giants for your container storage is a risk for sensitive Norwegian data.
Latency is the other factor. If your users are in Oslo, routing your API calls through a data center in Frankfurt adds 20-30 ms of round-trip time. In a microservices architecture, where one user request might trigger ten internal service calls, that latency compounds: run those calls sequentially and 30 ms becomes 300 ms. Your site feels sluggish.
Benchmark: Ping to NIX (Norwegian Internet Exchange)
| Provider Location | Avg Latency (RTT) |
|---|---|
| US East Coast | ~95 ms |
| Central Europe (AMS/FRA) | ~25 ms |
| CoolVDS (Oslo) | < 2 ms |
Configuration snippet: Nginx Load Balancer for Docker
If you aren't ready for Kubernetes yet, you can use a simple Nginx reverse proxy to balance between containers. Here is a standard config we use on CoolVDS instances to route traffic to backend containers running on ports 8080 and 8081:
# Round-robin load balancing across two local app containers
upstream myapp {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    server_name app.example.no;

    location / {
        proxy_pass http://myapp;
        # Pass the original host and client IP through to the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
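The two backends can be plain Docker containers with published ports. Assuming a hypothetical myorg/myapp image that listens on port 8080 internally:

    # Two instances of the same image, published on different host ports
    docker run -d --name app1 -p 8080:8080 myorg/myapp
    docker run -d --name app2 -p 8081:8080 myorg/myapp

    # Reload Nginx after editing the upstream block
    nginx -s reload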
The Verdict
If you are a small team today in 2015, start with Docker Swarm or even just Docker Compose on a single beefy KVM node. It lets you move fast without drowning in Kubernetes manifests.
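For reference, a single-host stack in Compose stays pleasantly small. A sketch of a docker-compose.yml in the current 1.x syntax (myorg/myapp is a placeholder image):

    web:
      image: myorg/myapp      # placeholder: your application image
      links:
        - db                  # gives the web container a "db" hostname
      ports:
        - "8080:8080"
    db:
      image: postgres:9.4
      volumes:
        - ./pgdata:/var/lib/postgresql/data

One docker-compose up -d and the stack is live; put the Nginx config above in front of it and you have a perfectly respectable small-scale production setup.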
If you are building the next Spotify, start learning Kubernetes. It is the future. The beta tag is scary, but the architecture is sound.
Regardless of the orchestrator, the foundation matters. Containers are only as stable as the kernel they run on. Don't let IOwait or noisy neighbors kill your performance. Deploy a high-performance KVM instance on CoolVDS today, and give your containers the home they deserve.