OpenVZ Containers: The Good, The Bad, and The "Failcnt"
Let’s be honest for a second. If you’ve been in the hosting game as long as I have, you’ve likely been burned by a "budget VPS" that promised the moon and delivered a pocket calculator. It is September 2012, and the virtualization market is flooded. Everyone is spinning up nodes, slapping a WHMCS frontend on them, and calling themselves a "Cloud Provider."
But under the hood, not all virtual servers are created equal. Today, we are tearing apart OpenVZ. It is the technology powering 70% of the budget VPS market in Norway, but is it fit for your production stack? As a sysadmin who has spent too many nights staring at `top` while a client's site crawls, I'm here to tell you exactly how it works, when to use it, and when to run screaming toward full hardware virtualization like KVM or Xen.
The Architecture: Chroot on Steroids
Unlike Xen or KVM (Kernel-based Virtual Machine), OpenVZ doesn't simulate hardware. It doesn't give you your own kernel. It is essentially operating system-level virtualization. Think of it as a super-advanced chroot environment. All containers on the host node share the same kernel.
This architecture is brilliant for efficiency. There is zero emulation overhead. The disk I/O goes straight to the filesystem, and the CPU instructions are executed natively. This is why an OpenVZ container boots in seconds while a KVM instance takes nearly a minute.
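Not sure what you are actually running on? There is a quick way to tell from inside the guest. A minimal sketch (the kernel version shown is just an example; the detail that matters is the "stab" suffix OpenVZ kernels carry):

```bash
# An OpenVZ guest runs the host's kernel and exposes UBC accounting;
# a KVM or Xen HVM guest boots its own kernel and has no beancounters.
uname -r
# e.g. 2.6.32-042stab061.2   <- the "stab" suffix marks an OpenVZ kernel

[ -r /proc/user_beancounters ] && echo "This looks like an OpenVZ container"
```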
The "Noisy Neighbor" Problem
However, shared kernels mean shared fate. If one user on the node manages to trigger a kernel panic (perhaps with a buggy kernel module or a fork bomb that bypasses limits), the entire physical server goes down. Every single customer on that box goes offline.
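If you are the one running the host node, you can at least contain the blast radius of a runaway fork loop by capping process counts per container. A small example with `vzctl`, using container ID 101 (the same container you will see in the beancounter output below) and the usual barrier:limit syntax:

```bash
# Cap container 101 at 240 processes so a fork loop trips the UBC
# numproc limit instead of exhausting the shared kernel's PID space.
vzctl set 101 --numproc 240:240 --save
```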
Furthermore, resource isolation in OpenVZ relies on User BeanCounters (UBC). This is where the nightmares begin for database administrators.
The Horror of `/proc/user_beancounters`
In a standard Linux environment, if you run out of RAM, the OOM (Out of Memory) killer sacrifices a process to save the system. In OpenVZ, memory management is far more rigid. You have limits on specific kernel objects—TCP buffers, directory entries, and memory pages.
If you are running a MySQL database on a cheap VPS and it keeps crashing without an error log, check your beancounters. Here is a command I run immediately when logging into a troubled OpenVZ node:
```bash
cat /proc/user_beancounters
```
You will see output that looks like this (and if you see non-zero numbers in the `failcnt` column, you have a problem):
```
Version: 2.5
   uid  resource          held     maxheld     barrier       limit   failcnt
  101:  kmemsize       2605806     2863640    11055923    11377049         0
        lockedpages          0           0         256         256         0
        privvmpages      68445       89560       65024       69632     14023
        shmpages           660         660       21504       21504         0
        numproc             29          48         240         240         0
        physpages        28414       36768           0  2147483647         0
        vmguarpages          0           0       33792  2147483647         0
        oomguarpages     28414       36768       26112  2147483647         0
        tcpsndbuf       138848      409532     1720320     2703360         0
        tcprcvbuf       130224      686644     1720320     2703360         0
```
See that `failcnt` (failure count) of 14023 on `privvmpages`? That means an application tried to allocate memory 14,023 times and the kernel said "No," not because the physical server was full, but because an artificial barrier was hit. This kills MySQL tables. This kills Java heaps.
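The full table is noisy, so here is a one-liner to show only the counters that have actually rejected allocations (`failcnt` is always the last column, which keeps the awk simple):

```bash
# Print only rows whose last column is a non-zero integer; the regex
# skips the header line ("failcnt") and the "Version: 2.5" line.
awk '$NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters
```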
**Pro Tip:** If you are configuring MySQL 5.5 on OpenVZ, be extremely conservative with your `innodb_buffer_pool_size`. The overhead of InnoDB can easily push you over the `kmemsize` barrier even if you have "free RAM" visible in `free -m`.
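Keep in mind that `privvmpages` is counted in 4 KiB pages, not bytes, so the ceilings are smaller than they look. Converting the barrier from the output above:

```bash
# privvmpages barrier of 65024 pages * 4 KiB per page, expressed in MiB
echo $(( 65024 * 4 / 1024 ))
# => 254  (MiB of private address space before failcnt starts climbing)
```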
Configuring for Stability
If you must use OpenVZ (and it is great for lightweight web servers running Nginx + PHP-FPM), you need to optimize your stack to be lean. You cannot rely on swap space effectively in older, RHEL5-based OpenVZ kernels, though the newer RHEL6 (vSwap) kernels improve this.
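Before touching any config files, find out what is actually consuming your allocation. Resident set size is a rough but useful proxy for what UBC will charge you (GNU `ps` syntax):

```bash
# Top ten processes by resident memory, converted to MiB.
ps -eo rss,comm --sort=-rss --no-headers | head -n 10 | \
  awk '{printf "%7.1f MiB  %s\n", $1/1024, $2}'
```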
Here is a snippet for your `my.cnf` to reduce the memory footprint on a 512MB VPS:
```ini
[mysqld]
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Strictly limit InnoDB if you don't need ACID compliance on everything
innodb_buffer_pool_size = 32M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
```
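One gotcha before you restart: MySQL 5.5 refuses to start if `innodb_log_file_size` no longer matches the redo logs already on disk. Shut down cleanly and move the old logs aside first (the paths assume the default `/var/lib/mysql` datadir; adjust for your distro):

```bash
service mysql stop                 # a clean shutdown flushes the redo logs
mkdir -p /root/ib_logfile_backup
mv /var/lib/mysql/ib_logfile[01] /root/ib_logfile_backup/
service mysql start                # InnoDB recreates the logs at the new size
```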
The CoolVDS Difference: Transparency & Hardware
This brings us to the elephant in the room: Overselling. Because OpenVZ resources are soft limits, unethical hosts stack 100 users on a server meant for 20. When everyone hits the disk at once, your I/O wait shoots through the roof.
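You can often spot an oversold node from inside your own container: if I/O wait climbs while your own processes sit idle, someone else is hammering the disks. A quick check:

```bash
# Sample system counters once a second, five times; watch the "wa"
# (I/O wait) and "b" (processes blocked on I/O) columns.
vmstat 1 5
```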
At CoolVDS, we approach this differently. Whether you choose our containerized solutions for development or our KVM instances for production, we map resources strictly.
Data Integrity and Compliance in Norway
For our Norwegian clients, data location is not just about low latency (though a 2 ms ping to NIX is nice); it's about the law. With Datatilsynet strictly enforcing the Personal Data Act (Personopplysningsloven), you need to know exactly where your data lives.
We don't just throw your data into a generic cloud. We operate our own hardware. And speaking of hardware, while the industry is still largely spinning rust (HDD) or hybrid setups, we are aggressively moving toward pure SSD arrays. We are even testing early enterprise implementations of NVMe storage protocols in our labs to prepare for the next generation of I/O throughput.
When to Use Which?
| Feature | OpenVZ | KVM / Xen (CoolVDS Pro) |
|---|---|---|
| Kernel | Shared with host | Dedicated (customizable) |
| Performance | Native speed (high) | Near native (minimal overhead) |
| Isolation | Low (kernel panics are shared) | High (true hardware isolation) |
| Modules | Restricted (often no FUSE or ipset) | Full control (load anything) |
| Cost | Lowest | Moderate |
Conclusion
OpenVZ is a fantastic tool for staging environments, DNS servers, and lightweight proxies using Nginx. It is cost-effective and fast. But if you are deploying a mission-critical Magento store or a heavy database backend, do not gamble with shared kernels.
You need dedicated resources. You need DDoS protection that doesn't null-route your IP at the first sign of trouble. You need the stability of KVM backed by high-speed storage.
Don't let a failcnt ruin your uptime statistics. Evaluate your workload, check your logs, and choose the right virtualization technology.
Need help migrating from an overloaded container? Deploy a KVM instance with CoolVDS today and experience the stability of dedicated resources.