OpenVZ vs. KVM: The Hidden Cost of "Burstable" Resources
It is 2011, and the hosting market is flooded with offers that seem too good to be true. You have seen them on WebHostingTalk: "50GB Space, 1TB Bandwidth, 1GB RAM... for $5/month." As a Systems Architect operating out of Oslo, I can tell you exactly how they do it. They are not wizards; they are overselling using OpenVZ.
For development environments or simple static sites, container-based virtualization like OpenVZ is efficient. It is lightweight. It is fast. But for a high-traffic Magento store or a critical MySQL database? It is a minefield.
I recently audited a client's server that was suffering from mysterious downtime every day at 14:00 CET. The logs showed nothing. The "free -m" command showed plenty of RAM. Yet, the database service kept being killed. The culprit wasn't their code; it was the virtualization technology.
The Lie of "Guaranteed" RAM
In a full hardware virtualization environment like Xen HVM (Hardware Virtual Machine) or KVM (Kernel-based Virtual Machine)—which we are seeing mature rapidly with RHEL 6—RAM is RAM. If you are allocated 1024MB, that memory is reserved for you at the hypervisor level.
In OpenVZ, you are sharing the host's kernel. You are not a server; you are a glorified chroot with resource limits. The most dangerous limit is privvmpages. This is the accounting parameter that defines how much memory your applications can allocate, not necessarily how much physical RAM you have.
If your "noisy neighbor" on the same physical node decides to compile a massive kernel or run a fork bomb, the host's kernel might start killing processes in your container to save the node. This is resource contention, and it is the enemy of stability.
The Smoking Gun: /proc/user_beancounters
If you are on a VPS and suspect you are hitting these artificial limits, there is one file that tells the truth. Forget top; look at the beancounters.
cat /proc/user_beancounters
You will see output that looks like this:
       uid  resource        held   maxheld    barrier      limit  failcnt
       101: kmemsize     2863104   3436544   11055923   11377049        0
            lockedpages        0         0        256        256        0
            privvmpages    45211     49856      65536      69632     4829
            physpages      22154     26411          0 2147483647        0
            numproc           24        33        240        240        0
See that failcnt (failure count) column on the privvmpages row? It reads 4829. That means 4,829 times your applications asked for memory and the kernel said "No", even while free and top reported memory to spare. This causes silent crashes, corrupted MyISAM tables, and frustrated users.
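You do not need to eyeball the whole file to spot trouble. A minimal sketch that filters for rows with a non-zero failcnt; on a live container you would point the awk at /proc/user_beancounters itself, here it runs against a sample copied from the output above:

```shell
# Create a sample beancounters file mirroring the output shown above.
# (On a real OpenVZ VPS, skip this and read /proc/user_beancounters.)
cat > /tmp/ubc.sample <<'EOF'
uid resource held maxheld barrier limit failcnt
101: kmemsize 2863104 3436544 11055923 11377049 0
lockedpages 0 0 256 256 0
privvmpages 45211 49856 65536 69632 4829
numproc 24 33 240 240 0
EOF

# Keep only rows whose last field (failcnt) is a number greater than zero.
awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0' /tmp/ubc.sample
```

Run that on a cron schedule and you will know about allocation failures long before your users do.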
Kernel Limitations and Security
Because OpenVZ containers share the host kernel (currently usually 2.6.18 or 2.6.32 on CentOS), you cannot load your own kernel modules. Need to run a specific VPN config requiring tun/tap devices? You have to beg your host to enable it. Need a specific iptables module for advanced firewalling? You are out of luck.
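You can check from inside the container whether TUN/TAP has been granted, without opening a support ticket. A quick sketch:

```shell
# TUN/TAP appears as a character device at /dev/net/tun. Inside an OpenVZ
# container it only exists if the host admin enabled it for you; under
# KVM you control it yourself (modprobe tun on your own kernel).
if [ -c /dev/net/tun ]; then
    echo "tun/tap: available"
else
    echo "tun/tap: not available"
fi
```

If it is missing, the fix has to happen on the host node, not in your container.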
Furthermore, a kernel panic in one container can, in poorly configured environments, bring down the entire physical node. This is why for mission-critical hosting, isolation is not just a feature—it is a requirement.
Pro Tip: If you must run MySQL on a memory-constrained OpenVZ VPS, you must tune your InnoDB buffer pool strictly. Do not rely on defaults.
[mysqld]
# Tuned for a 512MB OpenVZ VPS: keep total allocations well under the barrier
key_buffer_size                 = 16M   # MyISAM index cache
query_cache_size                = 8M    # keep small; an oversized cache wastes privvmpages
innodb_buffer_pool_size         = 64M   # the single most important InnoDB setting
innodb_additional_mem_pool_size = 2M
innodb_log_file_size            = 16M   # on-disk redo log, not RAM
max_connections                 = 50    # each connection costs a few MB of buffers
Why KVM is the Future (and the Present at CoolVDS)
At CoolVDS, we have moved away from the overselling model. We utilize KVM (Kernel-based Virtual Machine) for our premium tiers. KVM turns the Linux kernel into a hypervisor. Each VPS has its own kernel, its own memory space, and its own disk I/O scheduling.
Why does this matter for your business in Norway?
- Data Integrity: With the Datatilsynet (Data Inspectorate) tightening regulations around data handling, you need absolute certainty that your data partitions are isolated.
- Performance Stability: We use high-performance RAID-10 SSD arrays. In a KVM environment, the I/O you pay for is the I/O you get. There is no "burst" marketing fluff.
- Latency: Our infrastructure is peered directly at NIX (Norwegian Internet Exchange). When you combine true hardware virtualization with local peering, you get single-digit millisecond ping times to Oslo DSL lines.
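Not sure which technology your current host actually sold you? The container itself gives it away. A rough heuristic (it assumes OpenVZ/Virtuozzo is the only container flavor in play):

```shell
# /proc/user_beancounters is a feature of the shared OpenVZ/Virtuozzo host
# kernel. A KVM or Xen guest boots its own kernel and will not have it.
if [ -e /proc/user_beancounters ]; then
    echo "container virtualization (OpenVZ/Virtuozzo)"
else
    echo "no beancounters: likely KVM, Xen, or bare metal"
fi
```

If a host advertised "dedicated RAM" and this file exists, you now know what you are actually paying for.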
Benchmarking I/O: The Truth Test
Don't just take a host's word for it. Run a simple dd test to check write speeds. On an overloaded OpenVZ node, you might see 15-20 MB/s. On a CoolVDS SSD KVM instance, you will see raw speed.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
If that result returns anything less than 150 MB/s on an SSD plan in 2011, your host is throttling your I/O or the node is oversold.
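A single dd pass can be flattered by a momentarily quiet node, so run it a few times and compare. A small sketch (each pass writes ~64MB to /tmp; path and sizes are arbitrary):

```shell
# Three write passes; the last line of each dd run reports throughput.
# conv=fdatasync forces data to disk before dd reports, so the figure
# reflects the disk, not the page cache.
for pass in 1 2 3; do
    dd if=/dev/zero of=/tmp/ddtest.tmp bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1
done
rm -f /tmp/ddtest.tmp
```

If the three numbers swing wildly, that is contention from neighbors, and it is just as damning as a low average.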
Conclusion: Choose Architecture over Marketing
OpenVZ has its place. It is excellent for budget-conscious students or non-critical testing. But if your business relies on uptime, database integrity, and consistent performance, you cannot afford to share a kernel.
The transition to KVM offers the isolation of a dedicated server at a fraction of the price. With the introduction of SSD technology into the server market, the bottleneck is no longer the disk—it's the virtualization architecture.
Ready to stop fighting for resources? Deploy a true KVM instance on CoolVDS today. Experience the stability of dedicated hardware with the flexibility of the cloud.