OpenVZ vs. Xen: The Truth About Container Virtualization in 2010
Let's be honest. If you are reading this, you are likely tired of shared hosting accounts that choke the moment your site gets hit by the Digg effect or lands a front-page mention on VG.no. You need a Virtual Private Server (VPS). But the market right now is flooded with cheap offers, and most of them are hiding a dirty secret: aggressive overselling via OpenVZ.
I've spent the last six months migrating high-traffic e-commerce clusters from dedicated iron to virtualized environments. I've seen nodes crash because one user decided to compile a kernel inside a container (spoiler: you can build it, but you can never boot it), and I've seen databases starve because of I/O wait times. As we close out 2010, understanding the underlying tech, specifically OpenVZ containers versus Xen paravirtualization, is the only way to protect your uptime.
The Architecture: A Chroot on Steroids?
OpenVZ is not hardware virtualization. Unlike Xen, which slides a hypervisor between the hardware and the guests and lets you run your own kernel, OpenVZ uses a single, shared Linux kernel (usually a patched RHEL/CentOS 5 kernel) across all guests. Think of it as a highly advanced `chroot` environment.
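If you ever get a shell on a host node (most customers never will, so treat this as illustrative), the shared-kernel design is obvious:

```bash
# On an OpenVZ host node, every "VPS" is just a group of processes
# running under one kernel; there is no virtual machine anywhere.
uname -r    # the single kernel string every container shares
vzlist -a   # OpenVZ host utility: list all containers and their states
```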
The Good: Raw Efficiency
Because there is no hypervisor layer between your processes and the hardware, OpenVZ is fast. Incredibly fast. A `syscall` inside an OpenVZ container is an ordinary kernel call, with no hypervisor transition to pay for. In our benchmarks at CoolVDS, an Apache 2.2 web server running on OpenVZ handles static file requests about 3-5% faster than on Xen, simply because context switching is cheaper.
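If you want to reproduce this kind of comparison yourself, ApacheBench is the quickest tool. The URL below is a placeholder; point it at an identical static file on each VPS:

```bash
# 10,000 requests at 50 concurrent connections against a static file.
# Compare the requests/sec figure between an OpenVZ and a Xen guest.
ab -n 10000 -c 50 http://your-vps.example.com/test.html
```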
The Bad: The "Noisy Neighbor" and Kernel Limits
Here is the trade-off. Since you share the kernel, you share the fate of the node. If a neighbor triggers a kernel panic, your server goes down too. Furthermore, you cannot load your own kernel modules. Need `ip_conntrack` for a complex firewall rule? You'd better hope the host node has it enabled. If you need a custom VPN tunnel module that isn't present in the host kernel, you are out of luck.
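A quick way to test this before you commit to a firewall-heavy workload: try to add a rule that needs connection tracking and watch for the error (run as root; the exact error text varies by kernel):

```bash
# If the host kernel lacks ip_conntrack, iptables fails with something
# like "No chain/target/match by that name". Otherwise, delete the
# test rule immediately so it does no harm.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT && \
iptables -D INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```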
The UBC Nightmare: Understanding /proc/user_beancounters
Most sysadmins look at `top` or `free -m` and think everything is fine. On OpenVZ, those tools lie. The real truth lives in `/proc/user_beancounters`. This file tracks the User Beancounters (UBC), the set of resource limits imposed on your container.
The most dangerous parameter is `privvmpages`. It caps how much memory your processes may allocate, counted in 4 KB pages. If you hit it, `malloc()` fails. Your MySQL process doesn't swap; it crashes. I've debugged countless "random" crashes that were actually just a strict `privvmpages` limit being hit, even though `free -m` showed available RAM.
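To see how close you are to that ceiling, pull the line straight out of the beancounters file. Values are in 4 KB pages, so multiply by 4 for kilobytes:

```bash
# Output columns: resource  held  maxheld  barrier  limit  failcnt
# Compare 'held' against 'barrier' to see your remaining headroom.
grep privvmpages /proc/user_beancounters
```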
Pro Tip: Always check the `failcnt` column in `/proc/user_beancounters`. If it's anything other than 0, your applications are being silently strangled by the container's UBC limits.
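Here is a one-liner worth putting in a cron job (a sketch; wire up the alerting however you like):

```bash
# Print every resource that has ever failed an allocation.
# Skip the two header lines; failcnt is always the last column.
awk 'NR > 2 && $NF > 0' /proc/user_beancounters
```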
Storage Wars: SAS 15k vs. The SSD Revolution
In 2010, disk I/O is still the biggest bottleneck. OpenVZ containers share the host's filesystem. If one user runs a heavy `rsync` or a badly indexed SQL query, the I/O wait (`iowait`) spikes for everyone on that physical disk.
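You can watch this happen from inside your container. The `wa` column in `vmstat` shows CPU time stalled on disk (a rough heuristic; the exact threshold depends on your workload):

```bash
# Report every 5 seconds. A 'wa' column persistently above ~20%
# usually means someone is thrashing the shared disks.
vmstat 5
```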
This is where hardware choice matters. Most budget providers in Oslo are still spinning SATA drives in RAID-5. It's cheap, but slow. At CoolVDS, we are aggressively moving toward enterprise SSDs (solid-state drives) and 15k RPM SAS drives in RAID-10. The random write performance of SSDs all but eliminates the I/O contention that shared-filesystem containers suffer from.
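When you evaluate a provider, run a small random-write test; it tells you far more than a sequential `dd`. This `fio` invocation is a sketch, so tune the size and job count to your plan:

```bash
# 4 KB random writes with direct I/O (bypassing the page cache):
# exactly the access pattern that separates SSDs from RAID-5 SATA.
fio --name=randwrite --rw=randwrite --bs=4k --size=256m \
    --direct=1 --numjobs=4 --group_reporting
```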
| Feature | OpenVZ (Container) | Xen (Paravirtualization) |
|---|---|---|
| Kernel | Shared with Host | Isolated / Custom |
| Overhead | Near Zero | Low (2-5%) |
| Isolation | Moderate | High |
| Best Use Case | DNS, Web Serving, Dev | Databases, Java Apps, VPNs |
Norwegian Compliance: Keep It Local
We are seeing tighter scrutiny from Datatilsynet regarding where data physically resides. Under the Personal Data Act (Personopplysningsloven), ensuring your customer data stays within Norwegian borders (or EEA) is critical for compliance. Latency is another factor. If your target audience is in Oslo or Bergen, routing traffic through a datacenter in Frankfurt adds 20-30ms of unnecessary latency. By peering directly at NIX (Norwegian Internet Exchange), we keep local pings under 5ms.
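Measuring this yourself takes thirty seconds (the hostname is a placeholder):

```bash
# Round-trip time from your office; staying under ~5 ms is a good
# sign the traffic never leaves Norwegian infrastructure.
ping -c 10 your-vps.example.com
traceroute your-vps.example.com   # watch for hops through Frankfurt or Amsterdam
```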
The Verdict
OpenVZ is a fantastic tool for efficiency and cost-effective scaling, provided the host manages the resources honestly. It allows for "Burstable RAM," meaning you can temporarily use unused RAM from the host node—a great feature for handling sudden traffic spikes.
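You can read your own guarantee and burst ceiling directly from the beancounters. This is my reading of the standard UBC semantics; confirm the exact policy with your provider:

```bash
# vmguarpages barrier = memory you are guaranteed even under host pressure
# privvmpages limit   = your absolute burst ceiling (both in 4 KB pages)
egrep 'vmguarpages|privvmpages' /proc/user_beancounters
```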
However, if you need strict isolation, custom kernel modules, or guaranteed resources that no neighbor can touch, you need a hypervisor-based solution like Xen or KVM (which is rapidly maturing in RHEL 6).
At CoolVDS, we offer both. We configure our OpenVZ containers with generous UBC barriers and limits (so `failcnt` stays at zero) and back them with high-performance storage to mitigate the I/O bottleneck. We don't oversell, because we know that a slow server eventually becomes a cancelled server.
Ready to stop fighting `failcnt` errors? Deploy a properly tuned OpenVZ or Xen instance on CoolVDS today and experience the stability of premium Norwegian hosting.