The Container Hype Train Has No Brakes
It has been exactly one month since Docker released version 1.0 at DockerCon. The hype is deafening. Developers are throwing away their Vagrantfiles and shouting about "shipping binaries," while operations teams are left staring at a terminal wondering how to manage these black boxes in production. I’ve seen this movie before. It starts with "It works on my machine" and ends with a 3:00 AM page because no one knows which host the database container drifted to.
Here is the brutal truth: docker run is not a deployment strategy. If you are serious about moving from monolithic architectures to decoupled services in 2014, you need an orchestration layer. You need something that decides where containers live, ensures they stay alive, and helps them find each other.
In this analysis, we are going to look at the current contenders for managing container clusters: the distributed init system fleet (by CoreOS), the enterprise heavyweight Apache Mesos, and a quiet new contender from Google called Kubernetes. We will also discuss why your choice of underlying VPS virtualization (KVM vs. OpenVZ) matters more than the tool you pick.
1. The Lightweight Contender: CoreOS & fleet
If you have been following the CoreOS project, you know they are betting big on a stripped-down Linux distribution designed solely for massive server deployments. Their secret weapon is fleet. Think of fleet as systemd, but distributed across your entire cluster.
Instead of SSHing into web-01 to start a service, you submit a unit file to fleet, and it schedules the job onto a machine in the cluster. It relies heavily on etcd, the distributed key-value store CoreOS built for shared configuration and service discovery.
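If you have not played with etcd yet, think of it as a cluster-wide dictionary with an HTTP API. A minimal sketch using the etcdctl client (the key name below is made up for illustration):

# Write a value on any node in the cluster...
etcdctl set /services/db/host 10.0.0.5
# ...and read it back from any other node
etcdctl get /services/db/host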
Here is what a typical fleet unit looks like for a Dockerized Nginx service. I am writing it as a template (nginx-app@.service) so we can run more than one copy. Note the X-Fleet section at the bottom; that is where the magic happens:
[Unit]
Description=My Nginx Container (%i)
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx-app-%i
ExecStartPre=-/usr/bin/docker rm nginx-app-%i
ExecStartPre=/usr/bin/docker pull coolvds/nginx-custom:1.6
ExecStart=/usr/bin/docker run --name nginx-app-%i -p 80:80 coolvds/nginx-custom:1.6
ExecStop=/usr/bin/docker stop nginx-app-%i

[X-Fleet]
Conflicts=nginx-app@*.service
The Conflicts directive tells fleet never to place two instances of this template on the same host, so one dead machine cannot take out every Nginx at once. This is elegant, simpler than Chef, and fits the Docker ethos perfectly. However, fleet is still low-level: you are dealing with raw systemd units, and there is no auto-scaling based on CPU load yet. You have to script that yourself.
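Getting two copies running is then a matter of instantiating the template twice. A quick sketch, assuming a recent fleet with template support and a working fleetctl pointed at your cluster:

# Register the template, then start two instances of it
fleetctl submit nginx-app@.service
fleetctl start nginx-app@1.service nginx-app@2.service
# Verify they landed on different machines
fleetctl list-units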
2. The Heavyweight: Apache Mesos + Marathon
If you are running Twitter-scale traffic, fleet might feel like a toy. Enter Apache Mesos. Mesos abstracts CPU, memory, storage, and other compute resources away from machines, treating your entire datacenter as a single pool of resources. On top of Mesos, you run a framework like Marathon to orchestrate long-running applications (like web servers).
Mesos is powerful, but the learning curve is a vertical wall. Setting up ZooKeeper (required for master election and coordination) and the Mesos masters and slaves is a project in itself. Once it runs, however, it is bulletproof: when a slave dies, Marathon restarts its tasks elsewhere automatically.
Here is a snippet of a JSON payload you would post to the Marathon API to deploy a Docker container:
{
  "id": "basic-0",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 128.0,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 9000, "protocol": "tcp" }
      ]
    }
  }
}
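Save that as basic-0.json and POST it to Marathon's REST API. The hostname below is a placeholder for wherever your Marathon master lives:

curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d @basic-0.json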
Is this overkill for a simple Magento shop or a startup SaaS? Probably. The operational overhead of ZooKeeper alone usually scares off teams smaller than ten engineers.
3. The Wildcard: Google Kubernetes
Just last month, Google dropped the source code for Kubernetes. It is incredibly early days (v0.x alpha), but the concepts are fascinating: "Pods" (groups of containers that are always scheduled together) and "Replication Controllers" (a control loop that keeps a fixed number of pod copies running). It is an attempt to bring the ideas behind Borg, Google's internal cluster manager, to the masses.
It is currently too unstable for production workloads—I wouldn't bet my SLA on it just yet—but keep an eye on this. The declarative syntax is cleaner than fleet's systemd approach.
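For the curious, here is roughly what a pod definition looks like against the current v1beta1 API. Treat this as a sketch: the alpha API is churning weekly, and these field names may not survive to a stable release (the image is the same custom Nginx build from the fleet example above):

{
  "id": "nginx-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "nginx-pod",
      "containers": [{
        "name": "nginx",
        "image": "coolvds/nginx-custom:1.6",
        "ports": [{ "containerPort": 80, "hostPort": 80 }]
      }]
    }
  }
}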
The Infrastructure Layer: Why KVM is Non-Negotiable
This is where many DevOps engineers fail. They obsess over the orchestration tool but ignore the virtualization layer underneath.
Do not try to run Docker on OpenVZ.
OpenVZ relies on a shared host kernel, and most OpenVZ hosts still run 2.6.32-era kernels, while Docker relies on modern kernel features like cgroups and namespaces and wants Linux 3.8 or newer. When you run Docker inside OpenVZ, you are essentially trying to nest containers: you cannot upgrade the kernel, and you cannot load the kernel modules required for advanced networking. You will see cryptic errors like:
FATA[0000] Error mounting '/sys/fs/cgroup': device or resource busy
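Before you waste an evening debugging, check what you are actually running on. A quick sanity check (output varies by provider):

uname -r                      # Docker wants 3.8+; OpenVZ guests often report 2.6.32
ls /proc/user_beancounters    # if this file exists, you are inside an OpenVZ container
sudo docker info              # watch the storage driver line and any warnings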
For container workloads, you need KVM (Kernel-based Virtual Machine). KVM gives you a dedicated kernel. It acts exactly like bare metal.
Pro Tip: At CoolVDS, all our instances are pure KVM on top of high-performance hardware. We don't oversell CPU cycles. When you spin up a CoolVDS instance, you can install the latest Ubuntu 14.04 LTS, install Docker 1.0, and it just works because you have full control over your own kernel modules.
Performance & Latency: The Norwegian Context
If your target audience is in Oslo or the broader Nordics, you cannot ignore the physical laws of latency. Orchestration adds overhead. Overlay networks (like Flannel or Weave) add overhead. If you host your cluster in US-East while your users are in Bergen, you are adding 100ms of round-trip time before the packet even hits your Nginx container.
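This is trivial to measure before you commit to a datacenter, so measure it. The hostnames below are placeholders:

# From a machine near your users, compare round-trip times
ping -c 5 us-east.example.com    # transatlantic: expect roughly 100 ms
ping -c 5 oslo.example.com       # in-country: should be single-digit milliseconds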
Furthermore, we must respect the Norwegian Personal Data Act (Personopplysningsloven). Data residency is becoming a massive topic of discussion with the Datatilsynet (Data Protection Authority). Keeping your customer data on servers physically located in Norway or the EEA is the safest bet for compliance. Hosting on CoolVDS ensures your data stays within the jurisdiction, protecting you from legal headaches down the road.
Configuration Checklist for Production
Before you go live with your Docker cluster on CoolVDS, verify these settings on your Ubuntu 14.04 hosts. First, the storage driver in /etc/default/docker:

# Use the native AUFS driver; avoid devicemapper on generic Ubuntu kernels
DOCKER_OPTS="--storage-driver=aufs"

Then enable IPv4 forwarding so container traffic can be routed, in /etc/sysctl.conf:

# /etc/sysctl.conf
net.ipv4.ip_forward=1
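Apply and verify, assuming stock Ubuntu paths:

sudo sysctl -p                       # reload /etc/sysctl.conf
sudo service docker restart          # the service is named docker.io if you used Ubuntu's package
sudo docker info | grep -i storage   # should report: Storage Driver: aufs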
And make sure you raise your host's file descriptor limits; a busy container host will hold thousands of open sockets and files.
# /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
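Log out and back in, then confirm the new limit took effect:

ulimit -n    # should print 65535

One caveat: limits.conf only applies to PAM login sessions. The Docker daemon itself is started by init, so if the daemon needs more file descriptors, add a "limit nofile" stanza to its upstart job as well.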
Final Verdict
If you need to ship today, look at CoreOS and fleet. It strikes the right balance between "manual scripting" and "enterprise complexity." If you are building the next Facebook, invest in Mesos. But whatever you choose, build it on a solid foundation. Docker needs a modern kernel and fast I/O.
Ready to build your cluster? Deploy a KVM instance on CoolVDS today. We support custom ISOs and offer the low-latency NVMe performance your containers are starving for. Get started in under 55 seconds.