
OpenVZ Containers: Performance Savior or Kernel-Panic Waiting to Happen?

It is 3:00 AM. Your Nagios pager is screaming. Your client's Magento store, hosted on that budget VPS you picked up for $10 a month, has gone dark. You SSH in (eventually), run top, and see... nothing. No load. Plenty of free memory. Yet Apache is a pile of zombie processes and MySQL has crashed with a cryptic error.

Welcome to the world of OpenVZ resource limits. As a sysadmin who has spent the last five years debugging virtual environments from Oslo to Tromsø, I've seen the user_beancounters file destroy more uptime than any DDoS attack.

OpenVZ is the engine powering the explosion of cheap VPS hosting across Europe right now. It is brilliant technology, but it is also a minefield of shared resources and "burstable" promises. If you are deploying production infrastructure in 2012, you need to understand exactly what is happening under the hood, or you are just gambling with your uptime.

The Architecture: A Shared Kernel Reality

Unlike Xen or the rapidly maturing KVM (Kernel-based Virtual Machine), OpenVZ is operating system-level virtualization. There is no hypervisor emulating hardware. Every container on the host node shares the exact same Linux kernel.

The Pros: It is fast. Blisteringly fast. Because there is no overhead for instruction translation or hardware emulation, an OpenVZ container performs almost exactly like a bare-metal server for CPU-bound tasks. The density is also higher, which is why providers love it—they can cram 50 containers onto a single server where they might only fit 20 KVM instances.

The Cons: You are at the mercy of the kernel version the host is running. Need a specific kernel module for your VPN tunnel? You can't load it. Need to tweak `sysctl` parameters that affect the global network stack? Access denied.
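Before you fight the kernel, confirm you are actually on one of these shared kernels. This is a minimal check, assuming the standard OpenVZ /proc layout (the beancounters file is only exposed by OpenVZ/Virtuozzo kernels):

```shell
# Detect OpenVZ from inside a guest: /proc/user_beancounters is exposed
# only by OpenVZ/Virtuozzo kernels. Not a substitute for asking your
# provider, but handy during a 3 AM incident.
if [ -e /proc/user_beancounters ]; then
    echo "OpenVZ/Virtuozzo kernel detected"
else
    echo "no OpenVZ beancounters found"
fi
```

If the file is there, everything in the next section applies to you.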

The "Burstable RAM" Lie

The most dangerous aspect of OpenVZ for high-load applications is memory management. Providers sell you "512MB RAM + 1GB Burst." It sounds great. In reality, it introduces the User Bean Counters (UBC).

On a dedicated server or KVM VPS, if you run out of RAM, the kernel starts swapping. It's slow, but the system stays alive. On OpenVZ, depending on the configuration (SLM vs. UBC), the kernel might simply kill your processes instantly when you hit a specific barrier, even if the host node has 32GB of free RAM available.

Here is how you diagnose if your VPS provider is choking your resources. Run this command inside your container:

cat /proc/user_beancounters

You will see output that looks like this. Pay close attention to the last column, failcnt.

Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        2607865    4892390   14336000   14790160          0
            lockedpages           0          0        256        256          0
            privvmpages       68912     149234      65536      69632       142
            shmpages            667        667      21504      21504          0
            numproc              34         98        240        240          0
            physpages         29411      48912          0 2147483647          0
            vmguarpages           0          0      33792 2147483647          0
            oomguarpages      29411      48912      26112 2147483647          0
            numtcpsock           12         64        360        360          0
            numflock              4         11        188        206          0
            numpty                1          1         16         16          0
            numsiginfo            0         14        256        256          0
            tcpsndbuf        107144     528600    1720320    2703360          0
            tcprcvbuf        196080    1760468    1720320    2703360        315
            othersockbuf      13348     517616    1126080    2097152          0
            dgramrcvbuf           0       8512     262144     262144          0
            numothersock         11         84        360        360          0
            dcachesize            0          0    3409920    3624960          0
            numfile            1098       3456       9312       9312          0
            dummy                 0          0          0          0          0

See the 142 under failcnt for privvmpages? That means that 142 times, an application asked for memory and the kernel said "No," likely causing a segfault or a crash. The 315 under tcprcvbuf means incoming network packets were dropped because the buffer was full. The result is packet loss that looks like network lag but is actually a resource limit configured by your host.
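You can automate that eyeball check. The one-liner below is a sketch that assumes the standard UBC layout shown above, where failcnt is the last field on every resource line and the resource name sits six fields from the end (the first resource line carries an extra uid prefix):

```shell
# Print every beancounter resource with a nonzero failcnt.
# $NF is the last field (failcnt); $(NF-5) is the resource name,
# which also works on the uid-prefixed first line.
awk 'NF >= 6 && $NF + 0 > 0 { print $(NF-5) " failcnt=" $NF }' /proc/user_beancounters
```

Run it from cron and mail yourself the output; a silently growing failcnt is the earliest warning you will get.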

When to Use OpenVZ vs. KVM

OpenVZ isn't useless. It is excellent for non-critical development environments, VPN endpoints (provided the host admin has enabled the TUN/TAP device for your container), and static web serving where memory footprints are predictable. For serious database workloads, however, it presents significant risks.

The Database Problem

MySQL and PostgreSQL rely heavily on the system's ability to manage buffers and caches. In an OpenVZ environment, the "memory" reporting is often misleading. The free -m command might show you the host's memory, or a virtual limit, not what is physically allocated to you.
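Instead of trusting free -m, read the ceiling straight from the beancounters. The sketch below assumes privvmpages is accounted in 4 KB pages (the usual page size on x86) and that the barrier is the fourth field on that line, as in the output above:

```shell
# Convert the privvmpages barrier (4 KB pages) into megabytes to see
# the memory ceiling your container is actually held to.
awk '$1 == "privvmpages" { printf "privvmpages barrier: %d MB\n", $4 * 4 / 1024 }' /proc/user_beancounters
```

On the sample container above (barrier 65536 pages) this reports 256 MB, regardless of what free -m claims.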

If you are tuning my.cnf for a MySQL 5.5 InnoDB deployment, you typically set the buffer pool to 70-80% of available RAM:

[mysqld]
innodb_buffer_pool_size = 2G
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1

In a KVM environment, that 2GB is reserved for you. In OpenVZ, if the host node comes under pressure from a "noisy neighbor" (another customer running a heavy script), the kernel may reclaim that memory aggressively, killing MySQL outright to protect the integrity of the node.

Pro Tip: If you must use OpenVZ for MySQL, disable InnoDB and stick to MyISAM if possible, or keep your innodb_buffer_pool_size extremely conservative (under 50% of guaranteed RAM). It hurts performance, but it keeps the service running.
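If you do keep InnoDB on an OpenVZ container, a defensive my.cnf along these lines is safer than KVM-style tuning. The values are illustrative, sized for a plan with 512 MB guaranteed and 1 GB burst; adjust to your own guaranteed figure, never the burst figure:

```ini
[mysqld]
# Stay well under the guaranteed allocation, not the burst figure.
innodb_buffer_pool_size = 192M
# Keep per-connection buffers small; every connection
# counts against privvmpages.
max_connections         = 50
key_buffer_size         = 32M
tmp_table_size          = 16M
max_heap_table_size     = 16M
```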

The CoolVDS Approach: Reliability First

This is why at CoolVDS, we have shifted our primary infrastructure strategy towards KVM virtualization for all production-grade plans. While we offer managed containers for specific use cases, we believe that in 2012, hardware prices have dropped enough—especially with the introduction of high-performance SSD storage arrays—that sacrificing isolation for density is no longer necessary.

When you deploy a VPS with us, you get:

  • True Hardware Virtualization: Your kernel is your own. Load whatever modules you want.
  • Guaranteed Resources: If you buy 4GB of RAM, 4GB is allocated to your VM instance. No bean counters, no fail counts.
  • Norwegian Data Integrity: Your data sits in our Oslo datacenter. We operate under the strict guidelines of the Personal Data Act (Personopplysningsloven) and the Datatilsynet. This is critical for compliance if you are handling Norwegian customer data.

Local Latency Matters

If your target market is Norway, hosting in Germany or the UK adds 20-40ms of latency. Hosting in the US adds 100ms+. By peering directly at NIX (Norwegian Internet Exchange), CoolVDS ensures that your packets take the shortest possible route to Telenor, NextGenTel, and Altibox fiber networks.

You can verify your latency to our test IP. A solid connection from Oslo should look like this:

$ ping -c 4 oslo.speedtest.coolvds.com
PING oslo.speedtest.coolvds.com (185.x.x.x) 56(84) bytes of data.
64 bytes from 185.x.x.x: icmp_seq=1 ttl=58 time=1.84 ms
64 bytes from 185.x.x.x: icmp_seq=2 ttl=58 time=1.92 ms
64 bytes from 185.x.x.x: icmp_seq=3 ttl=58 time=1.88 ms
64 bytes from 185.x.x.x: icmp_seq=4 ttl=58 time=1.88 ms

--- oslo.speedtest.coolvds.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 1.842/1.880/1.921/0.054 ms

Conclusion

OpenVZ served us well during the early days of VPS hosting, bridging the gap between shared hosting and expensive dedicated servers. But for professional environments, the unpredictability of shared kernel resources is a liability you can't afford.

Stop fighting with failcnt. Move your workloads to a platform that respects your resource allocation.

Ready to stabilize your stack? Deploy a KVM instance on CoolVDS today and experience the difference of dedicated resources and SSD speed.