LXC vs. OpenVZ: The Reality of Container Management in 2013
Let’s be honest: the promise of "near-native performance" is the oldest sales pitch in the hosting book. If you are running a high-traffic Magento store or a latency-sensitive API endpoint in Oslo, you don't care about marketing fluff. You care about iowait, interrupt handling, and why your MySQL InnoDB buffer pool just got swapped to disk.
As we settle into 2013, the debate in the sysadmin trenches isn't just about OS choice anymore; it's about isolation methodology. We are seeing a massive shift from traditional heavy virtualization towards lighter containerization technologies like OpenVZ and the emerging LXC (Linux Containers). But for mission-critical infrastructure, is lighter actually better?
The OpenVZ Trap: Living in the Beancounters
OpenVZ has been the bread and butter of the VPS industry for years. It allows us to stack hundreds of containers on a single physical node. From a TCO (Total Cost of Ownership) perspective, it looks brilliant. But from a technical standpoint, it is a minefield of resource contention.
I recently audited a setup for a client in Trondheim whose web servers were falling over every Tuesday at 03:00. The logs showed nothing—no OOM killer, no panic. The culprit? User Beancounters.
OpenVZ doesn't just limit RAM; it limits kernel memory structures. Check your /proc/user_beancounters. If you see failcnt incrementing on kmemsize or numtcpsock, your container is silently failing allocations or refusing new sockets because of limits imposed from the host node, not anything you configured inside your instance.
# Checking fail counts on a struggling OpenVZ container
cat /proc/user_beancounters | grep -v " 0$"
       uid  resource           held    maxheld    barrier      limit  failcnt
      101:  kmemsize        2845600   14336000   14336000   14790160      432
            numtcpsock          120        360        360        360       55
That 55 in the failcnt column means 55 attempts to open a TCP socket were refused. Not because of a firewall, but because of an artificial limit defined in the container's config under /etc/vz/conf/ on the host. This is unacceptable for production environments.
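If you control the host node, the fix is to raise the barrier and limit for the offending beancounter and persist it. A minimal sketch, assuming the container ID is 101 (the values are illustrative, not a recommendation):
# Raise numtcpsock to a barrier:limit of 720:720 and write it to the container config
vzctl set 101 --numtcpsock 720:720 --save
# Confirm the new limits from the host
vzctl exec 101 cat /proc/user_beancounters | grep numtcpsock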
LXC: The Future, But Is It Ready?
LXC is gaining traction because it uses mainline Linux kernel features (cgroups and namespaces) rather than the patched kernel required by OpenVZ. This means you aren't stuck on an ancient RHEL 6 kernel (2.6.32). You can run modern kernels.
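Before committing to a distribution kernel, verify that the namespace and cgroup options LXC relies on are actually enabled. The lxc-checkconfig script that ships with the userspace tools does this; pointing it at the boot config via the CONFIG variable is the fallback when /proc/config.gz is absent:
# Verify kernel support for namespaces, cgroups and veth
lxc-checkconfig
# If the kernel does not expose /proc/config.gz, point the script at the boot config
CONFIG=/boot/config-$(uname -r) lxc-checkconfig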
However, LXC in early 2013 is still... raw. The tooling requires significant manual intervention. Creating a container isn't just a one-liner; you often need to manually bridge your networking interfaces.
Here is a typical network setup in /etc/network/interfaces on the host just to get LXC containers talking to the outside world without a NAT mess:
# /etc/network/interfaces on the host (requires the bridge-utils package)
auto br0
iface br0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.1
bridge_ports eth0
bridge_fd 0
bridge_maxwait 0
Then you define your container configuration. Unlike the rigid OpenVZ templates, LXC offers flexibility, but it lacks the robust resource quotas that OpenVZ's beancounters enforce. This means a "noisy neighbor" in an LXC environment can still trash the disk I/O for everyone else if the cgroups blkio parameters aren't tuned to perfection.
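For reference, the container definition is just a flat config file. The sketch below is illustrative; the container name, paths, addresses and cgroup values are assumptions, not a drop-in config:
# /var/lib/lxc/web01/config -- hypothetical container "web01"
lxc.utsname = web01
lxc.rootfs = /var/lib/lxc/web01/rootfs

# Attach the container to the br0 bridge defined above
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.20/24

# Best-effort resource limits via cgroups; nothing as strict as beancounters
lxc.cgroup.memory.limit_in_bytes = 1G
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.blkio.weight = 500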
Orchestration in 2013: Puppet is King
Whether you choose OpenVZ or LXC, managing them individually is suicide. You need automation. Right now, Puppet 3 is the standard for maintaining sanity across these environments. We don't have magical self-healing clusters yet, but we have MCollective.
Using MCollective, we can blast commands across 50 nodes simultaneously to check status or deploy patches. If you are managing VPS hosting or internal dev clusters without this, you are wasting billable hours.
# Using MCollective to check disk size across all web nodes
mco rpc rpcutil get_fact fact=blockdevice_sda_size -I /web/
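MCollective handles the ad-hoc queries; Puppet handles the steady state. A minimal Puppet 3 sketch that keeps a tuned /etc/sysctl.conf identical across nodes (the module name and source path are assumptions):
# Manage sysctl.conf from a module and reload it only when the file changes
file { '/etc/sysctl.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/sysctl/sysctl.conf',
  notify => Exec['reload-sysctl'],
}

exec { 'reload-sysctl':
  command     => '/sbin/sysctl -p /etc/sysctl.conf',
  refreshonly => true,
}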
However, automation cannot fix the underlying kernel sharing issue. If the kernel panics, everyone goes down. This brings us to the architectural decision we made at CoolVDS.
The KVM Advantage: Why We Don't Share
While containerization is exciting for density, KVM (Kernel-based Virtual Machine) remains the only logical choice for clients who need guaranteed performance. Unlike containers, KVM provides full hardware virtualization. Your memory is your memory. Your kernel is your kernel.
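If you are building the KVM host yourself, confirm first that the CPU exposes the virtualization extensions and that the modules are loaded:
# Count CPU cores advertising Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
# Confirm the kvm modules are loaded on the host
lsmod | grep kvm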
If you need to tune sysctl.conf for high-concurrency connections, you can do it without begging your hosting provider:
# Optimizing the network stack in your own KVM instance
# /etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10000 65000
net.core.somaxconn = 65535
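Reload the values in place; no reboot, no ticket to support:
# Apply the settings immediately
sysctl -p /etc/sysctl.conf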
Try setting `tcp_tw_reuse` inside a locked-down OpenVZ container. You can't. The host denies write access to `/proc/sys/net`.
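You can see it for yourself from inside a typical container; the exact error text varies, but the write is rejected:
# Attempting the same change from inside an OpenVZ container
sysctl -w net.ipv4.tcp_tw_reuse=1
# Fails with a permission error: /proc/sys/net is read-only for the container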
Pro Tip: If you are hosting data subject to the Norwegian Personal Data Act (Personopplysningsloven), true isolation is a legal safeguard. Shared kernels introduce theoretical vectors for memory scraping between tenants. KVM mitigates this significantly.
The Verdict
If you are building a dev environment to test code quickly, LXC is a fun, lightweight tool that is rapidly evolving. If you are selling cheap web hosting, OpenVZ is maximizing your profit margins.
But if you are a Systems Architect responsible for uptime during the Black Friday rush or ensuring low latency to the NIX exchange in Oslo, you stop playing with shared kernels. You deploy on KVM.
At CoolVDS, we have standardized on KVM virtualization backed by enterprise-grade SSD storage. We don't oversell, and we don't let your neighbors steal your CPU cycles. Stability isn't an option; it's the baseline.
Need a server that actually respects your `sysctl` settings? Deploy a KVM instance with CoolVDS today and experience the difference of dedicated resources.