Taming the Container Beast: Docker 1.0 vs. The Old Guard
It is July 2014. If you have been following the mailing lists or hanging out on IRC #devops, you know the noise level is deafening. Docker just hit version 1.0 last month. The hype train has left the station, and every developer with a MacBook Air suddenly thinks they are a systems architect. They hand you a Dockerfile and expect it to run flawlessly in production alongside your legacy RHEL 5 boxes.
I have spent the last week migrating a high-traffic Magento cluster for a client in Oslo from bare metal to a virtualized environment. The developers wanted Docker. The management wanted stability. I just wanted to sleep through the night without Nagios screaming at me. Here is the hard truth about container orchestration and virtualization right now: it is the Wild West.
The "Container" Confusion: OpenVZ vs. Docker vs. LXC
First, let's clear up the terminology that is confusing half the industry. We are seeing three distinct technologies fighting for dominance in the Norwegian hosting market.
1. OpenVZ (The Legacy Budget Choice)
OpenVZ has been the bread and butter of cheap VPS hosting for years. It uses a shared kernel: every container on the host node runs on the same kernel, so you cannot load your own kernel modules or pick your own kernel version. The stable OpenVZ branch is still based on a 2.6.32-era RHEL kernel, while Docker wants 3.8 or newer for its `cgroups` and namespace features. Try running Docker inside an OpenVZ container and you will hit wall after wall. It is fine for budget hosts, but for serious infrastructure it is a dead end.
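Before wasting an evening on a cheap VPS, you can usually tell whether it is OpenVZ from inside the guest. This is a rough sketch: the `/proc` entries below are OpenVZ-specific artifacts I have seen in the wild, and exact markers can vary by host kernel.

```shell
#!/bin/sh
# Rough OpenVZ detection sketch. OpenVZ guests typically expose
# /proc/user_beancounters (resource accounting) and /proc/vz;
# KVM guests and bare metal have neither.
if [ -e /proc/user_beancounters ] || [ -d /proc/vz ]; then
    echo "openvz: shared kernel, Docker will not run here"
else
    echo "no openvz markers: kernel looks dedicated"
fi
```

Run it on a prospective box; if you see the first line, move on.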
2. LXC (Linux Containers)
This is the grandfather. It is stable, it works, but the tooling is raw. You are essentially building your own jail cells with shell scripts.
3. Docker (The New Standard)
Docker originally wrapped LXC in a usable API; since v0.9 it ships its own `libcontainer` execution driver by default. It introduces the concept of immutable infrastructure: build an image once, run the same artifact everywhere. But Docker 1.0 is not magic. It is just processes, and processes need a solid kernel to schedule them.
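The immutable-infrastructure idea is easiest to see in a Dockerfile. A minimal sketch for a toy Python app; the base image and file layout here are illustrative, not from any real project:

```dockerfile
# Build once with `docker build -t myapp .`, then run the same image everywhere.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
# ADD copies the app source into the image at build time
ADD . /app
WORKDIR /app
CMD ["python", "app.py"]
```

The point is that the image, not the running host, is the unit you version and ship.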
Orchestration in 2014: Sticking it Together with Duct Tape
The biggest pain point right now isn't running a container; it's managing fifty of them. We don't have a standard "scheduler" yet. Google open-sourced something called Kubernetes at DockerCon last month, but that is alpha software. For now, we have to be creative.
Option A: The "Fig" Approach (Dev Environments)
For single-host orchestration, a tool called Fig is showing promise. It lets you describe your app in a YAML file. It is great for dev, but I wouldn't trust it to handle failover on a production node in a data center yet.
web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
Option B: Configuration Management (Chef/Puppet)
This is the "Battle-Hardened" way. We treat Docker containers just like services. I use Puppet to ensure the container is running. It is verbose, but it integrates with our existing reporting.
Here is a snippet from a Puppet manifest I used last week to ensure a Redis container stays alive:
docker::run { 'redis':
  image   => 'redis:2.8',
  ports   => ['6379:6379'],
  volumes => ['/var/lib/redis:/data'],
  require => Class['docker'],
}
The problem? Puppet runs periodically. If the container crashes 10 seconds after the Puppet run, you are down until the next run (usually 30 minutes). That is unacceptable for a high-availability SLA.
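One stopgap until real schedulers mature: let the init system supervise the container between Puppet runs. On a systemd distro (CentOS 7 just shipped), a unit like this restarts the container seconds after a crash. This is a sketch; the unit name and paths are illustrative, and the container runs in the foreground so systemd can watch it:

```ini
# /etc/systemd/system/redis-container.service (illustrative name)
[Unit]
Description=Redis container
After=docker.service
Requires=docker.service

[Service]
# Clear any stale container from a previous run; "-" ignores failures
ExecStartPre=-/usr/bin/docker stop redis
ExecStartPre=-/usr/bin/docker rm redis
# No -d flag: systemd needs the process in the foreground to supervise it
ExecStart=/usr/bin/docker run --name redis -p 6379:6379 -v /var/lib/redis:/data redis:2.8
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Puppet still declares the unit and enables it; systemd handles the seconds-scale restarts Puppet cannot.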
Option C: CoreOS and Fleet
This is the bleeding edge. CoreOS is an OS designed for this stuff, using `fleet` to schedule containers across a cluster using `systemd` units. It is brilliant, but it requires you to re-architect your entire infrastructure. If you are just trying to host a monolithic PHP app, it is overkill.
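For the curious, a fleet unit is just a systemd unit with an extra section telling the scheduler where it may land. A sketch, submitted with `fleetctl start redis.service`; the unit name and constraint are hypothetical, and option spellings have shifted between fleet releases (older versions prefix them with `X-`):

```ini
# redis.service
[Unit]
Description=Redis under fleet
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm redis
ExecStart=/usr/bin/docker run --name redis redis:2.8
Restart=always

[X-Fleet]
# Never co-locate two redis units on the same machine
Conflicts=redis*.service
```

Fleet then picks a machine in the cluster and hands the unit to that host's systemd.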
The Infrastructure Reality: Why KVM is Non-Negotiable
This brings me to the most critical decision: The Host OS. You cannot run Docker properly on a budget OpenVZ VPS. You need your own kernel to manage the `iptables` NAT rules and `cgroups` resource limits that Docker relies on.
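A quick sanity check I run on any new VM before installing Docker; this sketch only inspects what the kernel advertises, it changes nothing:

```shell
#!/bin/sh
# Pre-flight sketch: Docker needs cgroup support in the kernel and the
# ability to enable net.ipv4.ip_forward for its bridge NAT.
if grep -q cgroup /proc/filesystems; then
    echo "cgroup filesystem: available"
else
    echo "cgroup filesystem: MISSING"
fi
echo "ip_forward is currently: $(cat /proc/sys/net/ipv4/ip_forward)"
```

On an OpenVZ guest the first check routinely fails; on a KVM instance both should pass.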
This is where CoolVDS becomes the logical choice for us. Unlike the budget providers crowding the market, CoolVDS uses KVM (Kernel-based Virtual Machine) virtualization. KVM gives you a dedicated kernel.
Pro Tip: When running Docker on KVM, check your storage driver. By default, it might fall back to `devicemapper` loop-lvm mode, which is slow as molasses for I/O heavy apps. Configure it to use direct LVM volumes or Btrfs if you are feeling brave.
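On RHEL/CentOS the switch to direct LVM looks roughly like this, using the `dm.datadev` and `dm.metadatadev` storage options as of Docker 1.0. The volume group and volume names are placeholders for whatever you carve out beforehand:

```
# /etc/sysconfig/docker (Debian/Ubuntu: /etc/default/docker, variable DOCKER_OPTS)
# Point devicemapper at real LVM volumes instead of sparse loopback files.
# /dev/vg_docker/data and /dev/vg_docker/metadata must be created first.
OPTIONS="--storage-driver=devicemapper \
  --storage-opt dm.datadev=/dev/vg_docker/data \
  --storage-opt dm.metadatadev=/dev/vg_docker/metadata"
```

Restart the daemon afterwards; note that changing storage drivers orphans existing images, so do this before you load the box up.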
Optimizing Network Latency
In Norway, we have excellent connectivity via NIX, but latency still kills conversion rates. If you are using Docker's default bridge networking, you are introducing a NAT layer. For high-throughput internal traffic (like PHP talking to MySQL), this overhead adds up. When isolation matters less than latency, `--net=host` (available since Docker 0.11) skips the bridge entirely, at the cost of sharing the host's network namespace.
On a CoolVDS KVM instance, you can tweak the kernel `sysctl` settings to optimize this packet forwarding:
# /etc/sysctl.conf
# Critical for allowing Docker containers to talk to the outside world efficiently
net.ipv4.ip_forward = 1
# Increase the range of ephemeral ports for high connection rates
net.ipv4.ip_local_port_range = 1024 65000
# Decrease TIME_WAIT state to free up sockets faster
net.ipv4.tcp_fin_timeout = 30
Data Sovereignty and The "Datatilsynet" Factor
Technical architecture doesn't exist in a vacuum. With the Snowden revelations last year, Norwegian companies are rightfully paranoid about where their data lives. The Personopplysningsloven (Personal Data Act) is strict.
If you rely on US-based cloud giants, you are navigating a legal minefield regarding Safe Harbor. Hosting on CoolVDS ensures your data stays on physical hardware within the jurisdiction you expect. You know exactly where the bits are spinning. When the Data Inspectorate (Datatilsynet) comes knocking, you want to be able to point to a server rack in Oslo or elsewhere in Europe, not a nebulous "availability zone" controlled by a US corporation.
Summary: The 2014 Roadmap
We are at an inflection point. The tools are young, but the advantages of isolation and portability are too good to ignore. Here is my recommendation for the rest of 2014:
- Stop using OpenVZ for anything other than a personal sandbox. It cannot handle modern containerization.
- Adopt KVM. It provides the kernel isolation required for Docker stability. CoolVDS offers this at a price point that makes bare metal hard to justify.
- Start small with Docker. Use it for your stateless web workers. Keep your database on the host OS or a dedicated CoolVDS instance until the storage plugins mature.
The future is containerized, but today, stability is still king. Build on a foundation that won't collapse under the weight of the hype.
Ready to test your Docker stack on a real kernel? Spin up a KVM instance on CoolVDS in under 55 seconds and see the difference raw isolation makes.