LXC vs OpenVZ: The Reality of High-Density Virtualization in 2013
It is 3:00 AM. Your pager is screaming because the MySQL slave on your "High Performance" VPS just hit a wall. You SSH in, check top, and see... nothing. Load average is 0.2. Memory is free. Yet, queries are taking 5 seconds. Welcome to the hell of noisy neighbors and the failcnt trap of shared kernel virtualization.
In the Norwegian hosting market, where latency to NIX (Norwegian Internet Exchange) in Oslo is measured in precious milliseconds, you cannot afford to have your CPU cycles stolen by a teenager running a Minecraft server on the same physical node. Today, we are dissecting the two main contenders for container-based virtualization: the veteran OpenVZ and the rising star LXC (Linux Containers). We will also discuss why, sometimes, you just need a proper KVM hypervisor.
The Veteran: OpenVZ and the Bean Counters
OpenVZ has been the standard for cheap VPS hosting for years. It patches the Linux kernel (currently the RHEL/CentOS 6 kernel series) to allow multiple secure containers on one host. It is mature, but it has a dark side: resource management via User Beancounters (UBC).
OpenVZ imposes hard limits on resources you did not even know existed, and standard tools like free and top will never tell you when you hit them. Have you ever checked /proc/user_beancounters? If you are on a budget VPS, do it now.
# checking for resource exhaustion in OpenVZ
cat /proc/user_beancounters
# Output might look like this:
#  uid   resource        held    maxheld    barrier      limit  failcnt
#  101:  kmemsize     2640203    2942012   14336000   14790160        0
#        lockedpages        0          0        256        256        0
#        privvmpages    64210      89420      69632      69632      412
See that failcnt (failure count) of 412 on privvmpages? That is the kernel silently refusing your memory allocations because you hit a barrier, even while free -m insists you have RAM to spare. It is opaque and maddening for high-load applications like Magento or heavy Java stacks.
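A quick way to spot trouble without reading the whole table is to print only the rows with a non-zero failcnt (a one-liner sketch; it skips the version and header lines and matches a non-zero last column):
# Show only the beancounters that have actually failed
awk 'NR > 2 && $NF > 0' /proc/user_beancounters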
Furthermore, running OpenVZ usually locks you into an older kernel (the 2.6.32 branch) because maintaining the patch set against newer kernels is heavy work. If you want the new features in Linux 3.2 or 3.8, you are out of luck.
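You can spot that lock-in straight from the version string; OpenVZ stable kernels carry the telltale stab suffix (output illustrative):
# Kernel on a typical OpenVZ node
uname -r
# 2.6.32-042stab076.8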
The Challenger: LXC and Cgroups
LXC is gaining serious traction this year. It uses the mainline Linux kernel's cgroups (control groups) and namespaces. It is cleaner and doesn't require a heavily patched kernel like OpenVZ. There is also some interesting noise coming from PyCon this week about a tool called "Docker" that wraps LXC, but for now, we stick to raw LXC for production stability.
Deploying an LXC container on Ubuntu 12.04 LTS is refreshingly native:
# Install LXC tools
apt-get install lxc
# Create a new container
lxc-create -t ubuntu -n heavy-worker-01
# Start it up
lxc-start -n heavy-worker-01 -d
# Check status
lxc-list
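Resource limits go straight into the container's config as plain cgroup keys, which is where LXC feels far more transparent than UBC. A minimal sketch, with illustrative values you should tune for your own workload:
# Cap memory and pin CPUs, then restart the container to apply
cat >> /var/lib/lxc/heavy-worker-01/config <<'EOF'
lxc.cgroup.memory.limit_in_bytes = 1G
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 512
EOF
lxc-stop -n heavy-worker-01
lxc-start -n heavy-worker-01 -d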
LXC feels more like a standard Linux system. However, isolation is still not absolute. Security exploits that target the kernel can potentially bleed through because, ultimately, you are sharing the host's kernel. If the host kernel panics, everyone goes down.
Orchestration: Managing the Sprawl
Whether you choose OpenVZ or LXC, the real problem arises when you have 50 of them. You can't manually edit /etc/nginx/nginx.conf on 50 nodes. In 2013, if you aren't using configuration management, you are doing it wrong.
We rely heavily on Puppet manifests to keep our containers in line. Here is a snippet of how we ensure our base configuration is identical across all nodes:
# Puppet manifest for base security (paths and service names here are RHEL-style)
class base_security {

  package { 'fail2ban':
    ensure => installed,
  }

  # Declare the ruleset file so the subscribe below has something to watch
  # (the firewall module path is illustrative)
  file { '/etc/sysconfig/iptables':
    ensure => present,
    source => 'puppet:///modules/firewall/iptables',
  }

  service { 'iptables':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/sysconfig/iptables'],
  }

  file { '/etc/ssh/sshd_config':
    ensure => present,
    source => 'puppet:///modules/ssh/sshd_config',
    notify => Service['sshd'],
  }

  service { 'sshd':
    ensure => running,
    enable => true,
  }
}
Automating the creation of these containers is still a bit of a script-heavy process. We often use simple Bash wrappers around `vzctl` or `lxc-create` combined with kickstart files to bootstrap the Puppet agent. It is not elegant, but it works.
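For the LXC side, the wrapper looks roughly like this (a sketch of the idea rather than our exact tooling; the Puppet master hostname is a placeholder):
#!/bin/bash
# Create a container and pre-seed the Puppet agent into its rootfs
set -e
NAME="$1"
ROOTFS="/var/lib/lxc/${NAME}/rootfs"
PUPPETMASTER="puppet.example.net"   # placeholder, use your own master

lxc-create -t ubuntu -n "$NAME"

# Install the agent into the container filesystem from the host
chroot "$ROOTFS" apt-get -y install puppet

# Point the agent at the master and let it start on first boot
cat >> "${ROOTFS}/etc/puppet/puppet.conf" <<EOF
[agent]
server = ${PUPPETMASTER}
EOF
sed -i 's/START=no/START=yes/' "${ROOTFS}/etc/default/puppet"

lxc-start -n "$NAME" -d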
The "CoolVDS" Factor: Why KVM is the Safe Bet
While containers (LXC/OpenVZ) offer high density, they suffer from the "noisy neighbor" problem. If another customer on the node runs a fork bomb, your latency suffers.
This is why at CoolVDS, we prioritize KVM (Kernel-based Virtual Machine). KVM is a hypervisor module built into the mainline Linux kernel; paired with the hardware virtualization extensions in modern CPUs, the host effectively acts as a Type-1 hypervisor. When you buy a CoolVDS instance, you get:
- A dedicated Kernel: You can load your own modules (IPSet, specialized TCP congestion control); see the quick demo after this list.
- Hard Resource Limits: RAM is reserved for you. No failcnt.
- Storage Isolation: We use raw LVM volumes or qcow2 images, meaning I/O-heavy neighbors don't kill your seek times.
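Here is the kind of thing you simply cannot do inside an OpenVZ or LXC container, because it touches the kernel itself (the congestion control algorithm is just an example):
# The kernel is yours: load extra modules and change TCP congestion control
modprobe ip_set
modprobe tcp_htcp
sysctl -w net.ipv4.tcp_congestion_control=htcp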
Pro Tip: For database servers (MySQL/PostgreSQL), always adjust your I/O scheduler. In a virtualized environment, the host handles the physical disk sorting. Inside your VM, set the scheduler to `noop` or `deadline` to reduce CPU overhead.
# Change I/O scheduler to noop for reduced latency in VMs
echo noop > /sys/block/vda/queue/scheduler
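The echo above only lasts until the next reboot. On a Debian/Ubuntu guest, a common way to make it stick is to pass it on the kernel command line (a sketch; adjust for your distro):
# Append elevator=noop to the default kernel parameters, then rebuild the GRUB config
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 elevator=noop"/' /etc/default/grub
update-grub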
Data Sovereignty in Norway
We must also address the legal landscape. Under the Norwegian Personal Data Act (Personopplysningsloven) and the Data Protection Directive 95/46/EC, you are responsible for where your users' data lives. Hosting on cheap US-based clouds introduces latency and legal ambiguity regarding the US Patriot Act.
Keeping your data in Oslo isn't just about keeping Datatilsynet happy; it is about performance. A ping from downtown Oslo to a DigiPlex or Green Mountain datacenter is <2ms. To Frankfurt, it's 15-20ms. To the US? 100ms+. For an e-commerce checkout flow, that latency kills conversion rates.
Conclusion
If you are building a dev sandbox, LXC is a fantastic, modern choice. If you are a hosting provider trying to squeeze pennies, OpenVZ is the standard. But if you are a Systems Architect responsible for uptime and consistent performance, you want hardware virtualization.
Don't let shared kernels be the bottleneck of your infrastructure. Experience true isolation and low-latency Norwegian connectivity.
Ready to ditch the failcnt errors? Deploy a KVM-backed instance on CoolVDS today and get root access in under 55 seconds.