OpenVZ vs. KVM: The Truth About Container Virtualization in 2012

It happens every week. A developer calls me, frantic. Their site is down. The database is corrupted. They swear they haven't touched the config files in months. They check top, and the load average is low. They check memory, and it looks like they have 2GB free. Yet MySQL keeps crashing with a generic error.

The culprit? Almost always the same thing: OpenVZ overselling.

In the Norwegian hosting market, we see a race to the bottom on price. Providers promise you the moon—"Unlimited Bandwidth," "Burstable RAM"—for the price of a coffee in Oslo. But when you are running a high-traffic Magento store or a critical CRM for a client in Stavanger, "burstable" resources are a liability, not a feature. Today, we are going to look under the hood of OpenVZ, analyze the infamous user_beancounters file, and discuss when you should stick to containers and when you need to upgrade to true hardware virtualization like KVM (Kernel-based Virtual Machine).

The Architecture: Shared Kernel vs. Private Kernel

To understand the performance issues, you have to understand the architecture. OpenVZ is operating-system-level virtualization. It creates multiple isolated containers (VPS) on a single physical server. Crucially, all containers share the same Linux kernel (usually a heavily patched RHEL6 kernel, like 2.6.32).

This is great for density. A host can cram hundreds of containers onto one server because there is no overhead for emulating hardware. But this efficiency comes at a steep price: you give up real isolation.

The "Noisy Neighbor" Effect

If your neighbor on the same physical node decides to compile a massive C++ application or gets hit by a DDoS attack, the shared kernel struggles to schedule CPU time for your processes. You feel their load. In the hosting industry, we call this the "Noisy Neighbor" effect. On KVM or Xen, the hypervisor acts as a strict traffic cop. On OpenVZ, it's more like a polite suggestion.

Pro Tip: If you need to load kernel modules like ip_conntrack for a custom firewall or you want to run a VPN using TUN/TAP devices, OpenVZ often requires the host provider to enable it manually for you. On KVM, you own the kernel. You do what you want.
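A quick way to see whether your container already has TUN/TAP enabled is to look for the device node. A minimal sketch (the check_tun helper name is my own; the optional path argument exists only so the check can be exercised against a test path):

```shell
#!/bin/sh
# check_tun: report whether the TUN/TAP device node is usable in this
# container. Defaults to the real device path.
check_tun() {
    if [ -c "${1:-/dev/net/tun}" ]; then
        echo "tun: available"
    else
        echo "tun: missing (ask your OpenVZ host to enable it)"
    fi
}

check_tun
```

If it reports missing on OpenVZ, only your provider can fix it; on KVM you can create the node and load the module yourself.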

Diagnosing the Phantom Crash: The Bean Counters

This is the technical part most providers hide. In a standard Linux environment, if you run out of RAM, the OOM (Out of Memory) killer sacrifices a process. In OpenVZ, you have a second layer of limits called UBC (User Beancounters).

You might see free RAM in your standard monitoring tools, but if you hit a UBC limit, your application crashes without warning. Here is how you check if your provider is throttling you silently.

Run this command on your VPS:

cat /proc/user_beancounters

You will see a table like this:

       uid  resource           held    maxheld    barrier      limit    failcnt
      101  kmemsize        2876403    3045000   14336000   14660000          0
           lockedpages           0          0        256        256          0
           privvmpages       64230      64500     262144     272000         48
           physpages         34211      35000          0 2147483647          0
           numfile            4500       4600       8192       8192         15

Look at the last column: failcnt. If that number is anything other than zero, you have a problem.

  • privvmpages: This is your allocated memory. In the example above, the failcnt is 48. That means 48 times an application asked for memory and the OpenVZ barrier refused it, even though standard tools still showed free RAM. The result? Segfaults and killed processes.
  • numfile: The number of open files. A failcnt here means Nginx or Apache couldn't open a new log file or socket.

If you see fail counts rising, you are being throttled, regardless of what free -m tells you.
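To spot throttling at a glance, you can filter the table for non-zero fail counters. A minimal sketch (the ubc_failures helper name is my own; the file argument exists only for testing against a saved copy of the table):

```shell
#!/bin/sh
# ubc_failures: print every UBC resource whose failcnt (last column) is
# non-zero. Reads /proc/user_beancounters unless another file is given.
ubc_failures() {
    # Skip the version line and the column header. Lines that start with
    # a uid have 7 fields; continuation lines have 6, so the resource
    # name is in a different column.
    awk 'NR > 2 && $NF + 0 > 0 { print (NF == 7 ? $2 : $1), "failcnt=" $NF }' \
        "${1:-/proc/user_beancounters}"
}
```

Re-run it during peak traffic. Any line it prints means the container limits, not your application, are the problem.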

Optimizing for OpenVZ: Survival Mode

If you are stuck on a legacy OpenVZ contract or budget constraints force you to stay, you must optimize your software stack to be lightweight. You cannot afford the memory bloat of a default Apache configuration.

1. Switch to Nginx

Apache's prefork model eats RAM for breakfast. Nginx's event-driven model is essential for 2012-era VPS hosting. Here is a memory-safe configuration for a 512MB VPS:

user www-data;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    
    # Disable access logs if I/O is a bottleneck
    access_log off;
    
    sendfile        on;
    keepalive_timeout  15;
    
    # Gzip saves bandwidth but costs CPU; on a CPU-starved
    # OpenVZ container, keep the compression level low.
    gzip  on;
    gzip_comp_level 2;
}
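Before and after the switch, it is worth auditing where your RAM actually goes. This one-liner (GNU procps syntax assumed) lists the five biggest processes by resident memory:

```shell
#!/bin/sh
# Show the five largest processes by resident set size (RSS, in KB).
# Useful for comparing the footprint of Apache prefork vs. Nginx workers.
ps axo rss=,comm= --sort=-rss | head -n 5
```

On a 512MB container, a single Apache prefork child near the top of that list is your cue to finish the migration.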

2. Tune MySQL (InnoDB)

The default MySQL 5.5 configuration is designed for dedicated servers. On a VPS, you need to lower the buffer pool. If you don't, MySQL will attempt to grab memory that OpenVZ claims is "burstable" but isn't actually physically available, leading to immediate termination.

[mysqld]
# If you have 1GB RAM, do not set this higher than 256M on OpenVZ
innodb_buffer_pool_size = 256M

# Reduce connection overhead
max_connections = 50
key_buffer_size = 16M
query_cache_limit = 1M
query_cache_size = 16M
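To sanity-check a configuration against your container's real ceiling, estimate MySQL's worst-case footprint: the global buffers plus max_connections times the per-connection buffers. A rough sketch (the function name is my own, and the 2.6 MB per-connection figure is an approximation of the MySQL 5.5 defaults for the sort, read, and join buffers plus the thread stack):

```shell
#!/bin/sh
# mysql_worst_case_mb <buffer_pool> <key_buffer> <query_cache> <max_connections>
# Prints a rough worst-case memory ceiling in MB. ~2.6 MB per connection
# approximates the 5.5-era per-thread buffers.
mysql_worst_case_mb() {
    awk -v bp="$1" -v kb="$2" -v qc="$3" -v mc="$4" \
        'BEGIN { printf "%d\n", bp + kb + qc + mc * 2.6 }'
}

# With the settings above: 256 + 16 + 16 + 50 * 2.6
mysql_worst_case_mb 256 16 16 50   # prints 418
```

Keep that number comfortably below your privvmpages barrier, not merely below what free -m reports.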

Why CoolVDS Chooses KVM and SSDs

At CoolVDS, we have moved away from the "overselling" model common in cheap European hosting. While OpenVZ has its place for development environments or non-critical proxies, serious business infrastructure requires consistency.

We utilize KVM virtualization. When you buy 2GB of RAM from us, that RAM is hardware-reserved for your kernel. Your neighbor cannot steal it. Furthermore, we address the biggest bottleneck in virtualized hosting: Disk I/O.

Traditional hosting uses SATA drives running at 7200 RPM. When twenty VPS instances try to write logs simultaneously, the disk queue spikes, and your site loads slowly. This is disastrous for SEO.

CoolVDS deploys Enterprise SSD storage in RAID-10 arrays. The IOPS (Input/Output Operations Per Second) on SSDs are orders of magnitude higher than spinning rust. In our benchmarks, a MySQL import that takes 14 minutes on a standard SAS drive finishes in under 2 minutes on our SSD nodes.
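You can verify I/O claims yourself with the classic dd write test. A sketch (disk_bench is my own wrapper; conv=fdatasync forces the data to actually reach the disk so the page cache cannot flatter the result, and for a meaningful number you should write at least 1 GB):

```shell
#!/bin/sh
# disk_bench: crude sequential write test. Writes <count> blocks of 64 KB
# to <target>, syncing to disk, then prints dd's throughput summary line.
disk_bench() {
    target="${1:-./bench.tmp}"
    count="${2:-16384}"   # 16384 * 64 KB = 1 GB
    dd if=/dev/zero of="$target" bs=64k count="$count" conv=fdatasync 2>&1 | tail -n 1
    rm -f "$target"
}
```

On a busy OpenVZ node the result swings wildly with your neighbors' activity; run it at different times of day before drawing conclusions.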

Legal & Latency: The Norwegian Context

Latency matters. If your customer base is in Oslo or Bergen, hosting in Texas or even Frankfurt adds unavoidable milliseconds to every round trip. By peering directly at NIX (Norwegian Internet Exchange), we ensure your packets stay local.

Furthermore, with the Personopplysningsloven (Personal Data Act) enforced by Datatilsynet, knowing exactly where your data physically resides is becoming critical for compliance. On our KVM instances, you have full encryption control at the block level—something impossible to achieve securely on a shared OpenVZ kernel.

The Verdict

OpenVZ is not evil. It is an efficient technology from the mid-2000s designed for density. But in 2012, with web applications becoming more resource-intensive, the trade-offs are becoming harder to justify for production environments.

If you are tired of debugging mysterious crashes and fighting for CPU cycles, it is time to own your resources.

Don't let slow I/O kill your SEO. Deploy a test KVM instance on CoolVDS in 55 seconds and feel the SSD difference.