OpenVZ Containers: The Good, The Bad, and The Kernel Panic - A 2012 Reality Check

It is 2012, and the hosting market is flooded with "unlimited" offers. You have seen the ads on WebHostingTalk: 2GB RAM, 50GB disk, $5 a month. It sounds like a steal, but if you are running a high-traffic Magento store or a critical MySQL cluster, it might just be a robbery. The culprit isn't always the hardware; often, it is the virtualization technology itself. Today, we are tearing down OpenVZ, the technology powering the budget VPS industry, and asking the hard questions about isolation, resource guarantees, and stability.

I have spent the last week debugging a client's server that kept dropping connections during peak traffic from Oslo. Their monitoring tools showed plenty of free RAM. The CPU load was low. Yet, Apache was segfaulting, and MySQL was crashing. The root cause? They hit a hidden limit in the OpenVZ user_beancounters that their provider forgot to mention. If you care about uptime, you need to understand what is happening under the hood.

The Architecture: Shared Kernels and Thin Walls

Unlike KVM (Kernel-based Virtual Machine) or Xen, which virtualize at the hardware level, OpenVZ is operating system-level virtualization. It uses a single, heavily patched Linux kernel to split a physical server into multiple "containers" (VEs). There is no hypervisor layer translating instructions. Your VPS is essentially a glorified chroot environment with resource limits applied.
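
Not sure which technology your current provider uses? Here is a quick heuristic you can run from inside the VPS; it is not a definitive test, but it is reliable in practice:

# OpenVZ exposes these even inside a container
[ -d /proc/vz ] && echo "OpenVZ kernel detected"
[ -f /proc/user_beancounters ] && echo "beancounters present"
uname -r    # OpenVZ nodes typically report a patched kernel, e.g. 2.6.32-042stab...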

Pro Tip: Because you share the kernel, you cannot load your own kernel modules. Need a specific VPN module or a custom file system driver like FUSE? If the host node administrator hasn't enabled it, you are out of luck.
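
You can verify module availability before committing to a provider. A minimal check, using fuse as the example module:

# Inside a container you cannot load modules yourself
modprobe fuse          # typically fails; only the host node can load kernel modules
# More direct: if the host enabled FUSE for your container, the device node exists
ls -l /dev/fuse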

The Pros: Why OpenVZ is Still Around

I am not here to bury OpenVZ completely. It has valid use cases, particularly where density and raw I/O throughput are paramount, and isolation is secondary.

  • Native Performance: Since there is no instruction translation, disk I/O and CPU execution are near-native. On a well-tuned node with Enterprise SSDs, file system operations fly.
  • Scalability: You can change the resources of a container on the fly without rebooting. Need more RAM for a 10-minute compilation task? vzctl set 101 --ram 4G --save happens instantly (see the host-side sketch after this list).
  • Price: It allows providers to pack more customers onto a single chassis. This drives down the cost of entry, making it viable for dev environments or simple static sites.
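
For context, this is what that resize looks like from the host node's side; a sketch assuming vzctl on a vSwap-capable (RHEL6-based) kernel and container ID 101:

# On the host node: grow container 101 on the fly, no reboot required
vzctl set 101 --ram 4G --swap 2G --save
# Inside the container, the new ceiling is visible immediately
free -m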

The Cons: The "Burst RAM" Lie and Noisy Neighbors

The biggest issue with OpenVZ isn't the technology; it's the economics. Because resources are "soft," providers oversell them. They promise you 512MB of Guaranteed RAM and 1GB of "Burst" RAM. But Burst RAM is just borrowed memory. If your neighbor on the physical node decides to compile a kernel or run a Minecraft server, your "burst" vanishes, and your processes get killed by the OOM (Out of Memory) killer.

The Silent Killer: User BeanCounters

In a KVM or Xen VPS, if you run out of RAM, you swap. In OpenVZ, if you hit a limit defined in /proc/user_beancounters, the kernel simply refuses the memory allocation request. Most applications are not written to handle malloc() failures gracefully. They crash.

Here is how you check if your provider is throttling you silently. Run this command inside your VPS:

cat /proc/user_beancounters

You will see output like this:

       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        2734080    2996224   14336000   14790656          0
            lockedpages           0          0        256        256          0
            privvmpages       76394      89442     262144     270336       4829
            shmpages            670        670      21504      21504          0
            numproc              34         45        240        240          0
            physpages         48234      59344          0 2147483647          0
            vmguarpages           0          0      33792 2147483647          0
            oomguarpages      48234      59344      26112 2147483647          0
            numtcpsock           12         15        360        360          0
            numflock              3          4        188        206          0
            numpty                1          1         16         16          0
            numsiginfo            0          1        256        256          0
            tcpsndbuf        104724     126900    1720320    2703360          0
            tcprcvbuf        192340     234668    1720320    2703360          0
            othersockbuf      13540      14984    1126080    2097152          0
            dgramrcvbuf           0          0     262144     262144          0
            numothersock         11         12        360        360          0
            dcachesize            0          0    3409920    3624960          0
            numfile             782        853       9312       9312          0
            dummy                 0          0          0          0          0

Look at the failcnt (failure count) column. In the example above, privvmpages shows 4,829 failures. That means 4,829 times an application asked for memory and was told "no," even though tools like top or free -m may have shown memory available. This is the disconnect that drives sysadmins insane.
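
Rather than eyeballing the whole table, you can filter for the rows that have actually failed; a small convenience one-liner:

# Print only beancounter rows with a non-zero failcnt
awk 'NF >= 6 && $NF + 0 > 0 { print }' /proc/user_beancounters

Run it from cron and compare snapshots over time; any growth in failcnt between runs means you are hitting a wall.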

Technical Comparison: OpenVZ vs. KVM/Xen

Feature              OpenVZ (Containers)                    KVM (Full Virtualization)
Kernel               Shared (mostly RHEL6 2.6.32)           Dedicated (install any kernel)
Resource Isolation   Soft limits (burstable)                Hard limits (dedicated)
Swap                 Fake (virtual swap / vSwap)            Real dedicated partition
Modules              Restricted (usually no iptable_nat)    Full control (load whatever you want)
Overhead             Very low (~1-2%)                       Medium (~5-10% for emulation)

The Tuning Reality: Surviving on OpenVZ

If you are stuck on an OpenVZ container, you must tune your stack conservatively. You cannot rely on the OS to manage memory pressure. You must configure MySQL and Apache to never exceed your Guaranteed RAM, ignoring the Burst capacity completely.

For a typical 512MB VPS, your my.cnf needs to be strict to avoid crashing tables:

[mysqld]
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
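# Per-connection buffers: these multiply by the number of simultaneous clients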
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Crucial for OpenVZ: Don't use huge InnoDB pools
innodb_buffer_pool_size = 64M
innodb_additional_mem_pool_size = 2M
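
The same discipline applies to Apache. Here is a minimal sketch for the prefork MPM (assuming Apache 2.2; the numbers are illustrative for a 512MB container and should come from measuring your actual per-process memory):

# Cap the worker count so peak memory stays below your guaranteed RAM
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    ServerLimit          15
    MaxClients           15
    MaxRequestsPerChild 500
</IfModule>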

Furthermore, check your TCP stack. OpenVZ containers often ship with small buffers by default. Most sysctl keys are read-only inside a container (the kernel belongs to the host), but some providers whitelist write access to safe keys. If yours does, optimize for latency:

# Attempt to optimize network stack if permitted
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
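
To test these without waiting for a reboot, apply them at runtime; on a restricted key you will get an explicit permission error rather than a silent failure:

# Apply at runtime (fails with "permission denied" on keys the host has locked)
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216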

The CoolVDS Approach: Performance Without the Gambling

At CoolVDS, we have analyzed the Norwegian hosting market, and the pattern is clear: too many providers are overselling resources to the point of instability. In a landscape where the Data Inspectorate (Datatilsynet) demands rigorous data integrity and uptime is critical for business continuity, gambling with shared kernels is a risk many cannot afford.

This is why CoolVDS has standardized on KVM virtualization for our primary product line. When you buy 2GB of RAM from us, that RAM is allocated to your VM and locked. No one else can touch it. We pair this with enterprise-grade RAID-10 SSD storage (and we are testing the new PCIe flash technology for even lower latency) to ensure that even heavy database writes don't stall.

However, for those who truly need the efficiency of containers, we offer Managed Containers. Unlike budget hosts, we strictly limit the number of containers per node, so the "Burst RAM" is actually available when you need it. Plus, our proximity to the NIX (Norwegian Internet Exchange) in Oslo keeps your latency to local customers consistently under 5ms.

Conclusion

OpenVZ is a powerful tool in the right hands, but a dangerous trap in the wrong ones. If you are running a static blog or a dev sandbox, the cost savings are attractive. But for production workloads where a privvmpages fail count means lost revenue, you need the isolation of hardware virtualization.

Stop fighting your neighbors for CPU cycles. Don't let unstable resource limits kill your project. Deploy a dedicated KVM instance with guaranteed resources on CoolVDS today, and experience the stability your code deserves.