The Container Wars: OpenVZ vs. LXC vs. The New Docker – A Survival Guide for Norwegian Sysadmins
Let’s be honest: the last month has changed everything. Since the Snowden leaks broke in June, every CTO in Oslo is suddenly asking me about data sovereignty. The illusion of the "safe US cloud" is shattered. But while we scramble to move data back behind the borders of Datatilsynet jurisdiction, we are ignoring a technical time bomb sitting right in our infrastructure: virtualization architecture.
Most "managed hosting" providers in Europe are selling you a lie. They sell you "RAM" and "Cores," but what they are actually giving you is a slice of a shared kernel using OpenVZ, where a noisy neighbor running a heavy MySQL query can tank your latency. I've spent the last week debugging a Magento cluster that was crawling, not because of the PHP code, but because the "Dedicated VPS" it was hosted on had a CPU Steal percentage of 25%.
Today, we are going to look at the state of containerization in mid-2013. We will compare the incumbent (OpenVZ), the raw power (LXC), and the noisy new kid on the block (Docker), and discuss how to orchestrate them without losing your mind—or your uptime.
The Incumbent: OpenVZ and the "Bean Counter" Problem
If you buy a cheap VPS in Norway today, 90% of the time you are getting an OpenVZ container. OpenVZ is operating system-level virtualization. It uses a single patched Linux kernel (usually 2.6.32) and slices it up into "Containers" (CTs).
For the hoster, this is paradise. They can oversell CPU and RAM aggressively because not every customer uses their limit at once. For you, the battle-hardened DevOps engineer, it can be a nightmare.
The Reality of Shared Kernels
Since you share the kernel, you cannot load your own kernel modules. Need a specific iptables module for a VPN? You have to beg your host to enable it on the hardware node. Furthermore, resources are managed by `User Beancounters` (UBC). If you hit a limit on `numtcpsock` or `kmemsize`, your application crashes, often without a clear OOM error in your logs.
Here is how you check if your host is choking you via UBC. Run this inside your VPS:
cat /proc/user_beancounters
If the `failcnt` column is anything other than zero, you are slamming into your limits and allocations are failing silently. I recently moved a client to CoolVDS specifically because they offer KVM (Kernel-based Virtual Machine) instead of OpenVZ. KVM gives you a real kernel. No beancounters. No shared memory limits.
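Before you migrate, confirm the diagnosis. This one-liner prints only the counters that have actually failed (a rough sketch; it assumes the standard layout where `failcnt` is the last column):
awk 'NR > 2 && $NF > 0' /proc/user_beancounters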
The Native Contender: LXC (Linux Containers)
LXC is what OpenVZ wants to be when it grows up: the same idea, standardized in the mainline Linux kernel. It leverages cgroups (control groups) for resource limits and namespaces for isolation. It’s brilliant because it’s native. You don't need a hacked kernel.
However, "orchestrating" LXC right now is manual labor. You are essentially building a chroot jail on steroids. Creating a container looks like this:
# Install LXC on Ubuntu 12.04 LTS
sudo apt-get install lxc
# Create a container named 'web01'
sudo lxc-create -t ubuntu -n web01
# Start it up
sudo lxc-start -n web01 -d
This works, but networking is painful. You usually have to bridge your `eth0` manually and mess with `iptables` NAT rules to get traffic in. It’s not something I want to do at 3 AM when a node fails.
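For the record, here is roughly what that plumbing looks like. Treat it as a sketch: interface names and the container IP are examples (Ubuntu's lxc package defaults to a private `lxcbr0` bridge on 10.0.3.0/24):
# Option A: bridge the container straight onto the LAN (doing this over SSH will drop your session)
sudo brctl addbr br0
sudo brctl addif br0 eth0
# Option B: keep LXC's private bridge and DNAT inbound traffic to the container
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.3.101:80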
The Disruptor: Docker (v0.5)
Then there is Docker. It was released just a few months ago, and honestly, the hype is deafening. I’ve been testing version 0.5.x, and while I would never put this in production for a bank yet, the concept is revolutionary.
Docker is currently a wrapper around LXC. It uses AUFS (Another Union File System) to layer file systems. This means I can commit a change to my application image and only ship the diff. For developers, this is magic.
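You can watch the layering happen yourself. A quick sketch (0.5.x command syntax is still shifting between releases, so treat the exact invocations as approximate):
# Run a throwaway container, change one file, then commit only that diff
ID=$(docker run -d ubuntu:12.04 touch /etc/built-by-ops)
docker commit $ID my-base
docker history my-base    # each layer is listed separately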
Here is a basic example of how we are experimenting with Docker for a stateless Nginx frontend:
# Dockerfile
FROM ubuntu:12.04
MAINTAINER OpsTeam
RUN apt-get update && apt-get install -y nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["/usr/sbin/nginx"]
Building and running it:
docker build -t my-nginx .
docker run -d -p 80:80 my-nginx
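And a quick sanity check that it is actually up and serving:
docker ps                  # my-nginx should be listed with port 80 mapped
curl -I http://localhost/  # expect an HTTP/1.1 200 from nginx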
The Catch with Docker in 2013
It is unstable. I've seen the daemon die and take all containers with it. Also, networking is still "host-only" or NAT-based, making cross-container communication difficult without complex port mapping. There is no "orchestration" layer yet—if the host dies, your container dies. You still need Puppet or Chef to manage the host itself.
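Our workaround today is to route everything through the host: publish each container's port, and let the other container talk to the Docker bridge address. A sketch with example names and ports (the bridge IP is typically 172.17.42.1 on current builds, but verify with `ifconfig docker0` on your box):
# Publish a hypothetical backend container's port on the host
docker run -d -p 8080:8080 my-app
# Then, in the frontend's nginx config, proxy via the bridge IP:
#   proxy_pass http://172.17.42.1:8080;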
The Architecture of Trust: The "Russian Doll" Strategy
So, how do we solve this? We need the isolation of a VM but the deployment speed of containers. The answer is the "Russian Doll" strategy, and it’s the reference architecture we use at CoolVDS.
Run Containers INSIDE KVM.
Do not run LXC or Docker on bare metal shared with other customers (like typical shared hosting). The isolation isn't strong enough: a kernel panic triggered from inside one container takes down the host and every neighbor on it. Instead, provision a robust KVM Virtual Private Server.
- Bottom Layer (The Hardware): CoolVDS enterprise hardware in Oslo.
- Middle Layer (The Hypervisor): KVM. This gives you a dedicated kernel and reserved RAM. No noisy neighbors stealing CPU cycles.
- Top Layer (Your Code): Inside your KVM instance, you use LXC or Docker to segment your apps (Web, App, DB).
This setup allows you to snapshot the entire KVM for backups, while using Docker/LXC to quickly spin up new app versions. It’s the only way to guarantee consistent I/O performance for your databases.
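A quick way to verify you actually got the middle layer you are paying for (assuming the `virt-what` package is available on your distro):
sudo virt-what    # should print 'kvm'; output like 'openvz' means you were sold a container
uname -r          # your own kernel, not a shared 2.6.32 you cannot touch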
Pro Tip: If you are running MySQL 5.5 on a KVM slice, ensure you change the I/O scheduler from `cfq` to `deadline` or `noop` inside the guest VM. Since the host (CoolVDS) handles the physical disk scheduling, your VM shouldn't try to re-order requests.
# Run as root: a plain 'sudo echo ... >' won't work, because the redirect runs as your user
echo noop > /sys/block/vda/queue/scheduler
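That echo does not survive a reboot. To make it permanent, set the default elevator on the kernel command line (a GRUB 2 example; file locations vary by distro):
# In /etc/default/grub, append to the existing default line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop"
sudo update-grub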
Comparison: 2013 Virtualization Landscape
| Feature | OpenVZ | LXC / Docker | KVM (CoolVDS) |
|---|---|---|---|
| Kernel | Shared (2.6.32) | Shared with Host | Dedicated |
| Isolation | Medium (Beancounters) | Low (Namespaces) | High (Hardware Virt) |
| Performance | Native | Native | Near-Native (Virtio) |
| Data Privacy | Shared Memory Risks | Host Dependent | Full Segregation |
Orchestration: The Missing Link
There is no dedicated container scheduler or cluster manager yet, so "orchestration" today means configuration management. Whether you choose LXC or pure KVM, you cannot manage this manually.
We rely on Puppet to enforce state. Here is a snippet of a Puppet manifest we use to ensure our KVM nodes are ready for high-load traffic. It tunes `sysctl` for the connection churn you see when serving Norwegian users over fast fiber links:
sysctl { 'net.ipv4.tcp_tw_reuse':
value => '1',
}
sysctl { 'net.core.somaxconn':
value => '1024',
}
# Protect against SYN floods
sysctl { 'net.ipv4.tcp_syncookies':
value => '1',
}
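One caveat: `sysctl` is not a built-in Puppet resource type, so the manifest above assumes a module from the Forge (we happen to use `thias/sysctl`; any equivalent will do). Pulling it in and testing locally looks like this, with paths as examples:
puppet module install thias-sysctl
puppet apply --modulepath=/etc/puppet/modules site.pp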
Combining a KVM foundation with Puppet management gives you a predictable infrastructure. You aren't hoping your neighbor doesn't launch a fork bomb; you are guaranteed your resources.
The Verdict
If you are running a hobby blog, OpenVZ is fine. But if you are handling customer data under Norwegian law, or running a business-critical application where latency equates to lost revenue, you cannot rely on shared kernels.
Technology is moving fast. Docker is promising, but it needs a stable home. That home is a KVM-based VPS. It provides the legal and technical firewalls you need in a post-Snowden world.
Don't let "steal time" kill your application's responsiveness. Build on a foundation that respects your resources.
Ready to take control of your stack? Deploy a pure KVM instance on SSD-backed storage with CoolVDS today and experience the difference of dedicated resources.