
OpenVZ vs. KVM: Why Your "Guaranteed" RAM is a Lie (And How to Fix It)


Let’s be honest. We’ve all been there. You grab a cheap VPS for a dev environment, maybe something running CentOS 5 or 6, and it feels fast—until it doesn't. You run top and see plenty of free memory, yet your MySQL process just crashed with an "Out of Memory" error. You restart httpd, and it hangs.

Welcome to the deceptive world of OpenVZ and the dreaded user_beancounters. While container-based virtualization has revolutionized the hosting market by driving prices down, it comes with a hidden cost that most providers won't tell you about: you are not really in control of your kernel.

As a sysadmin managing infrastructure across Oslo and Stavanger, I've seen too many projects fail because the team treated an OpenVZ container like a dedicated server. They are not the same. Today, we're going to look at why, dig into the numbers that prove it, and discuss why true hardware virtualization (like the KVM used at CoolVDS) is the only sane choice for production workloads in 2012.

The Architecture: Shared Kernel vs. True Virtualization

To understand the pain, you have to understand the architecture. In an OpenVZ environment, every VPS on the host node shares the exact same Linux kernel (usually a heavily patched RHEL6 kernel nowadays). There is no hardware abstraction layer. You are essentially running in a chroot on steroids.

This means if the host node is running kernel 2.6.32-042stab044.17, so are you. You cannot load your own kernel modules. You cannot tune standard TCP/IP sysctls unless the host allows it. And most importantly, memory management is... creative.
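You can verify all of this from inside a container in about ten seconds. The snippet below is a rough sketch of what you would typically see; the exact kernel string and error messages vary by host and provider:

# Inside the container: the kernel is the host's, not yours
uname -r
# -> 2.6.32-042stab044.17 (whatever the host node happens to run)

# Loading your own kernel module is not possible from inside a container
modprobe ip_conntrack
# -> FATAL: Module ip_conntrack not found (or "Operation not permitted")

# Many TCP/IP sysctls are read-only unless the host explicitly allows them
sysctl -w net.ipv4.tcp_window_scaling=1
# -> error: permission denied on key 'net.ipv4.tcp_window_scaling'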

The "Privvmpages" Trap

In OpenVZ, you aren't limited by actual physical RAM usage; you are limited by how much memory your processes have allocated, counted in pages. That counter is called privvmpages.

Pro Tip: Unlike KVM or Xen, where swap is actual disk space, OpenVZ "burst" memory is often just a promise. If the host node runs out of real RAM, your processes get killed, even if your VPS technically has "free" burst memory available.
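One more gotcha: privvmpages is counted in 4 KiB pages, not bytes, so you have to do the conversion yourself before comparing it to the RAM you were sold. A quick sketch of the arithmetic, using the barrier and limit values from the beancounters output further down:

# Convert privvmpages from pages to bytes (x86 pages are 4 KiB)
echo $((262144 * 4096))   # barrier: 1073741824 bytes = 1 GiB
echo $((524288 * 4096))   # limit:   2147483648 bytes = 2 GiB of "burst"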

Here is a war story from last week. We were migrating a Magento store for a client in Trondheim. The VPS had "2GB Guaranteed RAM". We tuned the InnoDB buffer pool in /etc/my.cnf to 1GB, leaving plenty for Apache.

[mysqld]
innodb_buffer_pool_size = 1G
query_cache_size        = 32M
max_connections         = 150

The site kept crashing during import. No errors in /var/log/mysqld.log. No errors in Apache. Just a silent death. I logged in and checked the one file that matters in OpenVZ:

cat /proc/user_beancounters

The output confirmed my suspicions:

Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize       15046286   16452044   42467328   46714060          0
            lockedpages           0          0       2048       2048          0
            privvmpages      261944     262144     262144     524288     482910
            physpages        120411     140512          0 2147483647          0
            numproc              84        125        240        240          0
            tcpsndbuf       2415520    4614304    8623104   12985344          0

Look at that failcnt (fail count) on privvmpages: 482,910. That means nearly half a million times, the application requested memory and the kernel said "No," simply because we hit the artificial barrier set by the hosting provider's config.
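If you suspect the same thing is happening to you, watch the counters while you reproduce the load. A minimal sketch (pick whichever resources matter to your stack):

# Poll the beancounters every 5 seconds; a rising failcnt means the kernel
# is silently refusing allocations, processes or socket buffers.
watch -n 5 "egrep 'privvmpages|numproc|tcpsndbuf' /proc/user_beancounters"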

Why KVM (and CoolVDS) is Different

This is why we advocate for KVM (Kernel-based Virtual Machine). With KVM, which is the standard for all CoolVDS instances, the hypervisor emulates physical hardware. You get your own kernel. You get your own virtual disk. If you have 2GB of RAM, the Linux kernel inside your VM manages that 2GB exactly how it sees fit.

If you want to recompile the kernel or tune the TCP stack to shave latency between your Oslo users and your server, you can.
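As an illustration, here is the kind of TCP tuning that is off-limits inside most containers but trivial on a KVM guest. Treat the values as a starting sketch for a busy web server, not gospel, and benchmark against your own traffic:

# /etc/sysctl.conf on a KVM guest -- you own the kernel, so these just work
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Apply without a reboot:
# sysctl -p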

Benchmarking Disk I/O: The Noisy Neighbor Problem

Another massive issue with containers is disk I/O contention. In OpenVZ, the file system is usually simfs. It's just a directory on the host's ext4 partition. If another customer on the same physical node decides to run a massive backup or compile a kernel, your disk performance tanks.
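It takes one command to see which world you are living in. Output varies by host, but on OpenVZ the root filesystem typically shows up as simfs rather than a real block device:

# Show the filesystem type backing the root of the container/VM
df -T /
mount | grep ' / '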

Let's look at a simple dd test on a loaded OpenVZ node vs. a CoolVDS SSD instance:

# The "Bad" Neighborhood (Oversold OpenVZ) [root@vps ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 16384+0 records in 16384+0 records out 1073741824 bytes (1.1 GB) copied, 18.2431 s, 58.9 MB/s

58 MB/s is unacceptable for a database server in 2012. You might as well be running on a laptop hard drive. Now, look at a proper KVM setup backed by Enterprise SSD storage:

# CoolVDS KVM Instance (High-speed I/O)
[root@vps ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.1201 s, 344.1 MB/s

That is the difference between a page load that bounces a user and a sale.

Legal & Local Nuances: Norway's Data Protection

Performance isn't the only metric. We operate under the Norwegian Personopplysningsloven. The Data Inspectorate (Datatilsynet) is strict about where data lives and who controls it. When you use shared kernel containers, the isolation barrier is thinner. A kernel exploit on the host could theoretically expose the memory of all containers.

For organizations handling sensitive data within the EEA, full hardware virtualization offers a much stronger security boundary. It demonstrates a higher level of due diligence in protecting customer data, which is crucial for compliance with the EU Data Protection Directive.

Optimizing for the "Burst"

If you are stuck on OpenVZ legacy hosting, you need to survive until you can migrate. Here is a snippet for your nginx.conf to limit worker connections and prevent hitting the numproc (number of processes) beancounter limit, which is another common killer.

worker_processes 2;

events {
    worker_connections 1024;
    use epoll;
}

http {
    # Limit request rates to save CPU cycles
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        location /login.php {
            limit_req zone=one burst=5;
        }
    }
}
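The same thinking applies to Apache, which was the real process hog in the Magento story above: every prefork child counts against numproc. Capping MaxClients well below the beancounter barrier keeps httpd from tripping it; the numbers here are illustrative, so size them to your own container:

# /etc/httpd/conf/httpd.conf (Apache 2.2, prefork MPM) -- illustrative values
<IfModule prefork.c>
    StartServers          4
    MinSpareServers       4
    MaxSpareServers       8
    ServerLimit          40
    MaxClients           40
    MaxRequestsPerChild  4000
</IfModule>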

The Verdict

OpenVZ was great for the era of $5 hosting when we just needed to host static HTML files. But in 2012, web applications are complex beasts. Magento, WordPress with caching plugins, and heavy MySQL usage demand real resources, not "burstable" promises.

If you care about stable latency to NIX (Norwegian Internet Exchange) and want to ensure your database doesn't vanish because a neighbor started a backup, it's time to leave the container behind.

Don't let legacy virtualization kill your uptime. Deploy a true KVM instance with high-performance SSD storage on CoolVDS today, and stop counting beans.