OpenVZ Containers vs. Xen: The Truth About Virtualization in 2009

OpenVZ Containers: The Efficiency Dream or the SysAdmin's Nightmare?

It is 2009. The economy is tightening, and every CTO from Oslo to Bergen is screaming about "consolidation." They want to squeeze every last CPU cycle out of their racks. Naturally, the conversation turns to virtualization. But not all virtual machines are created equal.

If you have been in the trenches managing hosting clusters, you know the debate: OpenVZ versus Xen. One offers incredible density and native performance; the other offers isolation and sanity. At CoolVDS, we see clients migrating from budget hosts every day because they fell into the "burst RAM" trap.

Let's cut through the marketing fluff. I am going to show you exactly how OpenVZ handles resources, why your MySQL database keeps crashing without an error log, and how to verify if your host is overselling their nodes.

The Architecture: Shared Kernel vs. Hypervisor

To understand the pain points, you have to understand the kernel. In a Xen environment (which we use for our premium tier), you are running a distinct kernel. If you panic your kernel, your VM goes down. Your neighbor stays up.

OpenVZ is different. It is Operating System-level virtualization. There is one Linux kernel (usually a patched RHEL/CentOS 5 kernel) handling everyone. The container is just a glorified chroot with resource limits.

Pro Tip: Because you share the kernel, you cannot load your own kernel modules. Need a specific VPN tunneling module or a custom file system driver? You are out of luck on OpenVZ unless your host enables it on the hardware node.
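Not sure which platform a box you inherited is running on? A quick heuristic, sketched here assuming the 2009-era /proc layout, is to look for the marker files each technology exposes:

```shell
#!/bin/sh
# Rough guess at the virtualization platform from /proc entries.
# Assumption: /proc/user_beancounters marks an OpenVZ container,
# /proc/xen marks a Xen guest; anything else is "unknown".
detect_virt() {
    if [ -f /proc/user_beancounters ]; then
        echo "openvz"
    elif [ -d /proc/xen ]; then
        echo "xen"
    else
        echo "unknown"
    fi
}
detect_virt
```

Inside an OpenVZ container this prints `openvz`; on a Xen guest, `xen`. It is a heuristic, not a guarantee, but it is usually right on stock RHEL/CentOS 5 kernels.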

The "Burst RAM" Illusion

This is where things get messy. OpenVZ uses a concept called "Burstable RAM." Providers sell you a VPS with "512MB Guaranteed / 1024MB Burst." It sounds great. In reality, that burst memory is a gamble. If the physical node is under load, that burstable memory vanishes instantly. If your application relies on it, the kernel invokes the OOM (Out of Memory) killer and terminates your `mysqld` process.
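You can sometimes catch the OOM killer red-handed in the logs. Here is a rough sweep; the log paths are the CentOS 5 and Debian defaults, so adjust for your distro, and keep in mind that kernel-level OOM messages may land on the host node rather than inside your container:

```shell
#!/bin/sh
# Scan common syslog locations for OOM-killer activity.
# Assumption: CentOS 5 logs kernel messages to /var/log/messages,
# Debian/Ubuntu to /var/log/syslog.
check_oom() {
    found=0
    for log in /var/log/messages /var/log/syslog; do
        if [ -f "$log" ] && grep -Eqi "oom-killer|out of memory" "$log" 2>/dev/null; then
            found=1
            grep -Ei "oom-killer|out of memory" "$log" 2>/dev/null | tail -n 3
        fi
    done
    [ "$found" -eq 1 ] && echo "OOM events detected" || echo "No OOM events found"
}
check_oom
```

If this comes up empty but `mysqld` keeps dying silently, that is itself a clue: the kill happened at the host kernel level, and /proc/user_beancounters is your more reliable witness.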

The Smoking Gun: /proc/user_beancounters

If you manage an OpenVZ VPS, this file is your bible. It tells you exactly which resource limits you are hitting. Unlike `top` or `free -m`, which can lie inside a container, the User Beancounters do not bluff.

Here is a script I use on every new client server to check for stability issues:

#!/bin/bash
# Check for failed resource allocations in OpenVZ

if [ ! -f /proc/user_beancounters ]; then
    echo "Error: Not an OpenVZ container."
    exit 1
fi

echo "Checking for resource exhaustion..."
# failcnt is always the LAST column; the uid line carries one extra
# leading field, so match on $NF rather than a fixed column number.
awk 'NR <= 2 || $NF > 0' /proc/user_beancounters

If that script outputs anything other than the header lines, you have a problem. Specifically, look at the `failcnt` (failure count) column.

Here is what a healthy container looks like versus one that is melting down:

Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        2641029    2942012   14336000   14790160          0
            lockedpages           0          0        256        256          0
            privvmpages       64105      64200      65536      69632          5
            physpages         24101      24950          0 2147483647          0
            vmguarpages           0          0      33792 2147483647          0
            oomguarpages      24101      24950      26112 2147483647          0
            numtcpsock           12         12        360        360          0
            numflock              2          2        188        206          0
            numpty                1          1         16         16          0
            numsiginfo            0          1        256        256          0
            tcpsndbuf        103220     103220    1720320    2703360          0
            tcprcvbuf             0       2268    1720320    2703360          0
            othersockbuf       2268       2268    1126080    2097152          0
            dgramrcvbuf           0          0     262144     262144          0
            numothersock         12         12        360        360          0
            dcachesize            0          0    3409920    3624960          0
            numfile             354        354       9312       9312          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            numiptent            10         10        128        128          0

See that 5 in the `failcnt` column for `privvmpages`? That means five times, an application tried to allocate memory and the kernel said "No." In a production environment, that likely means five dropped connections or five crashed PHP workers.
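When you only care about one counter, you can pull it straight out of the beancounters file. Here is a small helper of my own (not part of OpenVZ) that prints the failcnt for a named resource:

```shell
#!/bin/sh
# Print the failcnt (last column) for a single beancounter resource.
# The uid line has an extra leading field, so the resource name can
# appear in column 1 or column 2.
failcnt_for() {
    [ -f /proc/user_beancounters ] || { echo "not an OpenVZ container"; return 1; }
    awk -v r="$1" '$1 == r || $2 == r { print $NF }' /proc/user_beancounters
}
failcnt_for privvmpages
```

Run it from cron every few minutes and alert when the number grows between samples; a rising failcnt is the earliest warning you will get.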

Configuring MySQL for OpenVZ

The default my.cnf in CentOS 5 is not optimized for the strict memory barriers of OpenVZ. InnoDB defaults will eat your RAM allocation for breakfast. If you are running a modest LAMP stack, you need to cap your buffers.

Edit your /etc/my.cnf to prevent the OOM killer from targeting MySQL:

[mysqld]
# Optimize for small memory footprint (assuming 512MB VPS)
skip-locking
skip-bdb
skip-innodb

# If you MUST use InnoDB, keep the pool small
# innodb_buffer_pool_size = 16M

key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# STRICTLY limit connections to avoid memory runaway
max_connections = 50

Restart the service with:

service mysqld restart

This configuration forces MySQL to be lean. You trade some performance for stability, which is the only valid trade-off in a constrained container environment.
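A quick sanity check on those numbers: MySQL's worst-case footprint is roughly the global buffers plus max_connections times the per-thread buffers. Plugging in the values from the config above (this ignores MySQL's own code size and per-connection overhead, so treat it as a floor, not a ceiling):

```shell
#!/bin/sh
# Back-of-envelope worst-case memory for the my.cnf above (values in KB).
key_buffer=16384        # 16M, global
sort_buffer=512         # per connection
read_buffer=256         # per connection
read_rnd_buffer=512     # per connection
net_buffer=8            # per connection
max_connections=50

per_thread=$((sort_buffer + read_buffer + read_rnd_buffer + net_buffer))
total_kb=$((key_buffer + max_connections * per_thread))
echo "Worst case: roughly $((total_kb / 1024)) MB"
```

That lands under 80 MB even with all 50 connections maxing their buffers, which leaves real headroom for Apache and PHP on a 512 MB container.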

The Compliance Angle: Datatilsynet & Shared Kernels

Operating in Norway involves strict adherence to the Personal Data Act (Personopplysningsloven). While Safe Harbor frameworks cover data transfer, local storage security is paramount.

Here is the uncomfortable truth: In OpenVZ, root on the host node can see every file in your container easily. Furthermore, because of the shared kernel, a vulnerability in the kernel affects every single customer on that node. In 2009, isolation is security.

If you are storing sensitive customer data (Personnummer, health data), we strongly recommend moving to a solution with stricter isolation, like our Xen-based plans, or ensuring your provider enforces strict SELinux policies on the host node.

Network Latency and the "Noisy Neighbor"

Another issue with OpenVZ is the network stack. You usually get a `venet0` interface, which is a virtual network device. It is fast, but it rides on the host's TCP/IP stack and shares its state tables.

If another customer on the node gets hit with a DDoS attack, the connection tracking table (conntrack) on the host fills up. Suddenly, your legitimate packets are dropped.

You can check your limit with:

cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
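The max alone does not tell you how close the table is to filling. Kernels of this vintage expose a matching ip_conntrack_count in the same directory; this sketch compares the two (newer kernels moved these to /proc/sys/net/netfilter/nf_conntrack_*, so the paths are an assumption):

```shell
#!/bin/sh
# Compare conntrack usage against the limit.
# Assumption: 2009-era path names under /proc/sys/net/ipv4/netfilter.
conntrack_usage() {
    dir=/proc/sys/net/ipv4/netfilter
    if [ -f "$dir/ip_conntrack_count" ] && [ -f "$dir/ip_conntrack_max" ]; then
        echo "conntrack: $(cat "$dir/ip_conntrack_count") of $(cat "$dir/ip_conntrack_max") entries in use"
    else
        echo "conntrack proc files not exposed here"
    fi
}
conntrack_usage
```

If usage sits near the max while your own traffic is modest, a neighbor is eating the table.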

If you see packet loss but your bandwidth usage is low, check `numtcpsock` in your beancounters.
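For the socket side, this one-liner of mine shows how much of your numtcpsock barrier you are consuming, using the column positions from the beancounters output shown earlier:

```shell
#!/bin/sh
# Show numtcpsock usage: held ($2) against barrier ($4), plus failcnt ($NF).
check_numtcpsock() {
    [ -f /proc/user_beancounters ] || { echo "not an OpenVZ container"; return 0; }
    awk '$1 == "numtcpsock" {
        printf "numtcpsock: %s held of %s barrier (failcnt %s)\n", $2, $4, $NF
    }' /proc/user_beancounters
}
check_numtcpsock
```

A held value hugging the barrier means new TCP connections are about to start failing, long before bandwidth graphs show anything wrong.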

When is OpenVZ the Right Choice?

I don't want to paint OpenVZ as purely evil. It has massive advantages:

  • Speed: Without the overhead of hardware emulation, disk I/O and CPU execution are near-native.
  • Scalability: You can resize an OpenVZ container from 256MB to 2GB RAM instantly without a reboot.
  • Cost: It allows us to pack more containers into each rack, lowering the price for you.
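That instant resize is done on the hardware node with vzctl. Here is a sketch of the arithmetic; the CTID (101) and the 512MB target are hypothetical, and privvmpages are counted in 4KB pages. For safety the script only echoes the command; drop the leading echo to actually apply it:

```shell
#!/bin/sh
# Resize a container's memory via privvmpages (runs on the HOST node,
# not inside the container). CTID and sizes are hypothetical examples.
CTID=101
RAM_MB=512
PAGES=$((RAM_MB * 1024 / 4))   # 4 KB pages: 512 MB = 131072 pages
# Echo only, for safety; remove the "echo" to run it for real.
echo vzctl set "$CTID" --privvmpages "$PAGES:$((PAGES + 8192))" --save
```

The barrier:limit pair gives the container a small grace zone above the barrier before allocations hard-fail, and --save persists the change to the container's config.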
Feature                 OpenVZ                          Xen (HVM/PV)
Isolation               Shared Kernel (Process Level)   Dedicated Kernel (Hardware Level)
Performance Overhead    < 1%                            2% - 5%
Kernel Modules          Restricted                      Allowed (load whatever you want)
Swap Memory             Virtual / Burst                 Dedicated Partition

The CoolVDS Approach

The problem isn't the technology; it is the implementation. Most budget hosts put 100 containers on a server designed for 20. The disk I/O wait shoots up, and everyone suffers.

At CoolVDS, we use 15k RPM SAS drives in RAID-10 arrays to ensure that even if we use OpenVZ, the I/O bottleneck is virtually non-existent. We also monitor failcnt on our host nodes proactively. If we see a neighbor getting noisy, we move them or throttle them before they impact your NIX latency.

If you need raw performance for a development environment or a lightweight web server, OpenVZ is fantastic. But do it right.

Ready to deploy? Whether you choose the efficiency of OpenVZ or the iron-clad isolation of Xen, we have the infrastructure in Oslo ready for you. Stop fighting failcnt and start coding.

Deploy your reliable VPS with CoolVDS today.