LXC vs. OpenVZ vs. KVM: The Truth About "Cloud" Isolation & Performance in 2013
Let’s cut through the marketing noise that has infected the hosting industry lately. Everyone is selling "Cloud VPS" these days, but half the time, they are selling you a glorified chroot jail on an overloaded node. I have spent the last three weeks debugging a client's "high availability" cluster that kept crashing because of a noisy neighbor on a shared kernel. It wasn't their code; it was the virtualization technology.
If you are a sysadmin in Norway trying to host serious infrastructure, you need to understand exactly what is happening at the kernel level. Today, we are going to look at the differences between container-based virtualization (OpenVZ, LXC) and full hardware virtualization (KVM), and why that choice determines whether your MySQL database survives a traffic spike or gets killed off by a resource limit you never agreed to.
The Container Trap: OpenVZ and the "Burst RAM" Myth
OpenVZ has been the industry standard for budget VPS hosting for years. It uses a shared kernel: every VPS (or "container") on the host machine runs on the exact same Linux kernel. It is lightweight, fast to boot, and lets providers oversell resources aggressively.
In an OpenVZ environment, you don't really own your RAM. You are given "Guaranteed" and "Burst" RAM. When your neighbor’s WordPress site gets hit by a botnet, the host node’s kernel resource scheduler gets hammered. If you want to see if you are trapped in an OpenVZ cage, run this command:
cat /proc/user_beancounters
If you see output like this, you are on OpenVZ:
Version: 2.5
   uid  resource        held   maxheld   barrier     limit  failcnt
  101:  kmemsize     2641886   2940922  14372700  14790164        0
        lockedpages        0         0       256       256        0
        privvmpages    69424     73848    262144    272144        0
        shmpages         656       672     21504     21504        0
The problem? failcnt (fail count). If that number is anything other than zero on privvmpages, your container has been slamming into its memory barrier: the kernel has been refusing allocations, and under pressure it will kill your processes to save the host. For a production database, this is unacceptable.
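If you would rather not eyeball the whole table, a one-line awk filter does the job (a minimal sketch; it assumes the standard beancounters layout, where failcnt is the last column):
# Print every counter that has ever hit its limit (failcnt is the last column)
awk 'NR > 2 && $NF > 0 {print}' /proc/user_beancounters
# Silence means clean; any output means the host has been denying you resources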
The New Contender: LXC (Linux Containers)
LXC is gaining traction as the "upstream" replacement for OpenVZ. It leverages cgroups and namespaces native to the mainline Linux kernel (unlike OpenVZ, which requires a patched kernel). It is what the folks at DotCloud are using for their new open-source project, "Docker," which launched just last month.
LXC is brilliant for development environments because you can spin up a lightweight distro in seconds:
# Create an LXC container on Ubuntu 12.04
sudo lxc-create -t ubuntu -n my-web-server
# Start it up (detached, so it doesn't take over your terminal)
sudo lxc-start -n my-web-server -d
However, LXC still shares the host kernel. If you need to load a kernel module for a VPN (the TUN/TAP devices, for example) or tune TCP parameters via sysctl for latency-sensitive or I/O-heavy workloads, you will hit a wall: the host node's security policies usually prevent containers from touching kernel space.
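You can see that wall for yourself from inside a typical container. The exact error text varies with the host's setup, but the outcome does not:
# Inside a container, kernel-level operations are refused
sudo modprobe tun
# FATAL: Module tun not found (modules live on the host, not in your container)
sudo sysctl -w net.ipv4.tcp_fin_timeout=15
# error: permission denied on key 'net.ipv4.tcp_fin_timeout'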
The Authority Solution: KVM (Kernel-based Virtual Machine)
This is where we draw the line between "hobby hosting" and "enterprise infrastructure." KVM turns the Linux kernel itself into a hypervisor, and each guest VM runs its own private kernel on virtualized hardware. You can run CentOS 6 on the host and Debian 6, FreeBSD, or even Windows in the guests.
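One caveat: KVM needs hardware virtualization extensions on the host CPU. If you are evaluating a box, check the flags before you commit:
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
# 0 = no hardware virtualization; anything higher and KVM will run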
Why KVM is Mandatory for High-Performance Workloads
In a recent project migrating a Magento store from a generic German host to a KVM setup in Oslo, we saw page load times drop from 2.4s to 0.8s. Why? I/O Isolation.
With KVM, we can use paravirtualized virtio drivers, which hand disk and network requests to the host with far less overhead than fully emulated hardware. Furthermore, we can tune the guest kernel specifically for the workload without begging the hosting provider to change a global setting.
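Not sure what your current provider actually gave you? Check from inside the guest. Device names vary, but virtio disks show up as vdX rather than the emulated sdX/hdX:
# Look for paravirtualized devices on the guest's PCI bus
lspci | grep -i virtio
# 00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
ls /dev/vda 2>/dev/null && echo "virtio block device present"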
Here is how we tune the TCP stack on our CoolVDS KVM instances to handle high concurrency (something often blocked in OpenVZ):
# /etc/sysctl.conf optimizations for KVM instances
# Increase system file descriptor limit
fs.file-max = 2097152
# Improve TCP connection handling
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
# Allow more local port range for outgoing connections (proxies)
net.ipv4.ip_local_port_range = 1024 65535
# Drop half-closed (FIN-WAIT-2) connections faster
net.ipv4.tcp_fin_timeout = 15
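Apply the changes without a reboot:
# Reload all values from /etc/sysctl.conf
sudo sysctl -p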
Pro Tip: Always use the deadline or noop I/O scheduler inside your KVM guest if your host uses an SSD/RAID array. The host handles the physical sorting; your VM shouldn't waste cycles doing it twice.
# Check current scheduler
cat /sys/block/vda/queue/scheduler
# [cfq] deadline noop
# Change to noop on runtime
echo noop > /sys/block/vda/queue/scheduler
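That echo lasts only until the next reboot. To make it stick, set the elevator on the kernel command line; the file to edit varies by distro (this is the GRUB legacy layout on CentOS 6):
# /boot/grub/grub.conf -- append to the kernel line:
#   elevator=noop
# On Ubuntu, add it to GRUB_CMDLINE_LINUX in /etc/default/grub and run update-grub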
Orchestration in 2013: Puppet vs. Shell Scripts
Since we don't have magical "cloud balancers" that auto-scale yet, managing these KVM instances requires discipline. Relying on manual SSH loops is a recipe for disaster. We recommend using Puppet or Chef to orchestrate the configuration of your nodes.
For a standard CoolVDS deployment, we use a Puppet manifest to ensure every web node has the exact same Nginx configuration. This guarantees consistency across your cluster.
# site.pp snippet
node 'web-node-01' {

  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }

  file { '/etc/nginx/nginx.conf':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/nginx/nginx.conf',
    notify => Service['nginx'],
  }
}
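Before pushing that to a whole cluster, do a dry run on a single node; Puppet will report what it would change without touching anything:
# Simulate the run; report changes without applying them
sudo puppet apply --noop site.pp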
The Norwegian Context: Data Sovereignty & Latency
Beyond the technical specs, there is a legal reality we must face in 2013. With the Patriot Act in the US allowing broad data access, and the EU Data Protection Directive requiring strict handling of personal information, where your data sits physically matters.
Hosting in generic clouds often means your data bounces through Frankfurt or London. Local Norwegian infrastructure peers at NIX (the Norwegian Internet Exchange), which keeps your traffic within the country, ensuring compliance with Personopplysningsloven (the Personal Data Act) and satisfying Datatilsynet's requirements. Plus, the latency from Oslo to Bergen is roughly 8-12ms on our network, compared to 40ms+ when routing through Germany.
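Don't take anyone's word for the routing, ours included. Trace the path yourself (substitute your own server's hostname for the placeholder):
# If you see Frankfurt or London hops on a "Norwegian" host, you have your answer
traceroute -n your-server.example.no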
Conclusion: Stop Sharing Your Kernel
OpenVZ and LXC have their place—mostly for testing or ultra-low-budget personal sites. But if your business relies on uptime, you cannot afford to have your database killed because a neighbor on the same physical server decided to run a Bitcoin miner.
At CoolVDS, we don't mess around with shared kernels. We provide pure KVM instances backed by enterprise storage. You get your own kernel, your own dedicated memory, and the stability your users demand.
Ready to own your resources? Deploy a high-performance KVM instance in Oslo today.