KVM vs OpenVZ: Why Shared Kernels Are Killing Your Production Performance

It’s 3:00 AM. Your Nagios pager goes off. The load average on your main database server just spiked to 25.0. You SSH in, run top, and see... nothing. Your MySQL process is consuming 20% CPU. There is plenty of free RAM. Yet, the SSH session is lagging, and your HTTP response times to Oslo have jumped from 15ms to 2 seconds.

Welcome to the hell of Steal Time. Welcome to the reality of oversold OpenVZ containers.

In the current hosting market, too many providers are pushing cheap "Cloud VPS" solutions based on container technology like OpenVZ. While great for development sandboxes, relying on them for production workloads is a gamble with your uptime. At CoolVDS, we have taken a different stance by standardizing on KVM (Kernel-based Virtual Machine) for all serious deployments. Here is why the architecture matters more than the price tag.

The Architecture: Containers vs. Hypervisors

To understand the performance bottleneck, you have to look at the kernel. In an OpenVZ environment, every VPS on the host node shares the same Linux kernel. It is not true virtualization; it is chroot isolation on steroids. If one user on the node triggers a kernel panic or exploits a kernel vulnerability, everyone goes down.

KVM, which we use exclusively for our performance tier, turns the Linux kernel into a hypervisor. Each guest OS has its own kernel, its own memory management, and most importantly, strictly isolated resources.
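
Not sure which technology your current provider actually sold you? A rough heuristic (not bulletproof, and the exact output varies from host to host): OpenVZ containers expose the shared kernel's bean counters, while a KVM guest typically shows emulated or virtio hardware.

# Readable inside an OpenVZ container (as root)
$ cat /proc/user_beancounters

# On a KVM guest you will usually see QEMU or virtio devices instead
$ lspci | grep -i -E 'virtio|qemu'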

The "Noisy Neighbor" Effect

On a shared kernel (OpenVZ), resource limits are soft. Providers sell you "Burstable RAM." It sounds like a feature, but it is a bug: the RAM isn't actually yours, it is borrowed from a shared pool. When a neighbor gets hit with a DDoS or runs a heavy `make -j8` compile, the host kernel throttles you.

Check your system for "Steal Time" (st) right now:

$ top -b -n 1 | grep Cpu

If the %st value is above 0.0, the host is stealing cycles from you to give to someone else. On our KVM nodes, we enforce hard limits. Your CPU cores are reserved.
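
If you want the raw counter rather than top's percentage, the steal figure is also exposed in /proc/stat as the eighth value after the "cpu" label. A quick one-liner:

$ awk '/^cpu /{print "Steal jiffies since boot:", $9}' /proc/stat

Run it twice a few seconds apart; if the number keeps climbing, your cycles are going to somebody else's workload.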

Configuration: Tuning for Isolation

When you have a KVM instance, you can tune the kernel parameters specifically for your workload—something impossible inside a container. For a high-traffic web server targeting the Norwegian market, we typically deploy a CentOS 6.3 stack with Nginx replacing Apache.

Here is a production-ready sysctl.conf optimization for a KVM instance handling high concurrency. Do not try this on OpenVZ; you likely won't have permission to modify these network stack parameters.

# /etc/sysctl.conf

# Increase system file descriptor limit
fs.file-max = 65535

# Allow for more PIDs (necessary for high concurrency)
kernel.pid_max = 65536

# Network tuning for low latency
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Protect against SYN flood attacks
net.ipv4.tcp_syncookies = 1

Apply them with `sysctl -p`. Because you are on KVM, these settings actually take effect at the kernel level.
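
A quick sanity check after the reload (the output here is illustrative; the values echoed back should simply match what you put in the file):

$ sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 87380 16777216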

The I/O Bottleneck: Spinning Rust vs. SSD

The biggest lie in 2012 hosting is disk space. A provider gives you 500GB of space, but it's on a shared 7.2k RPM SATA drive. In a virtualized environment, random I/O is the killer. If you are running a database like MySQL 5.5 or PostgreSQL 9.2, rotational latency will destroy your application's responsiveness.

We are seeing a massive shift towards Solid State Drives (SSD). While expensive, the IOPS (Input/Output Operations Per Second) advantage is undeniable. A standard SATA drive might give you 100 IOPS. An enterprise SSD in RAID-10 can push 20,000+ IOPS.
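
Do not take the marketing numbers at face value; measure them. Here is a minimal random-read test with fio (assuming fio is installed; the directory, file size, and runtime are placeholders you should adjust):

$ fio --name=randread --rw=randread --bs=4k --size=1G --runtime=60 \
      --ioengine=libaio --direct=1 --directory=/var/tmp

On a shared 7.2k SATA drive the resulting IOPS figure will land in the low hundreds at best; on SSD-backed storage it should be orders of magnitude higher.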

Pro Tip: If you are stuck on a legacy provider, switch the I/O scheduler on the disk holding your MySQL data to `noop` or `deadline` instead of `cfq` to reduce overhead, though nothing beats raw SSD speed.
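
Checking and switching the scheduler takes seconds (sda is an assumption here; point it at whichever device backs your MySQL datadir):

# The active scheduler is the one shown in brackets
$ cat /sys/block/sda/queue/scheduler

# Switch at runtime; add elevator=deadline to the kernel line in grub.conf to make it stick across reboots
$ echo deadline > /sys/block/sda/queue/scheduler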

Data Sovereignty and The "Datatilsynet" Factor

For those of us operating in Norway, technical performance isn't the only metric. We have to consider the legal landscape. The Personopplysningsloven (Personal Data Act) places strict requirements on how we handle user data. Storing your customer database on a budget VPS in Texas might save you $5 a month, but it exposes you to legal liabilities under EU directives.

Furthermore, latency matters. If your user base is in Oslo or Bergen, routing traffic through Frankfurt or London adds unnecessary milliseconds. By hosting on local infrastructure connected directly to NIX (Norwegian Internet Exchange), you ensure that your packets stay local. Low latency isn't just about speed; it's about the "snappiness" of the user experience.
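
You can verify this from your own desk. mtr shows both the route your packets take and the per-hop latency (replace the hostname with your own server):

$ mtr --report --report-cycles 10 your-server.example.no

If the report shows your traffic detouring through Frankfurt or London before coming back to Norway, that is latency you pay on every single request.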

War Story: The Magento Migration

Last month, a client came to us with a Magento store running on a "High Memory" OpenVZ plan from a competitor. The site crashed every time they sent a newsletter. We migrated them to a CoolVDS KVM instance with 4GB RAM and SSD storage.

The Fix: We didn't just move files. We tuned the MySQL InnoDB buffer pool to utilize the dedicated RAM, knowing it wouldn't be stolen by a neighbor.

# /etc/my.cnf
[mysqld]
# Roughly half of the instance's 4GB, dedicated to InnoDB data and index pages
innodb_buffer_pool_size = 2G
# Bigger redo logs smooth out write bursts; on MySQL 5.5, resizing them requires
# a clean shutdown and moving the old ib_logfile* files out of the datadir
innodb_log_file_size = 256M
# Flush the log to disk once per second instead of per commit
# (a small durability trade-off for a big write-throughput gain)
innodb_flush_log_at_trx_commit = 2
# Query cache off entirely (see the note below)
query_cache_type = 0
query_cache_size = 0

Note: We disabled the Query Cache because on high-concurrency InnoDB setups, the mutex contention actually slows you down. This is a common mistake in default configurations.

Conclusion: Stop Renting Noise

In the systems administration world, predictability is worth more than raw burst speed. You cannot scale what you cannot predict. By choosing KVM virtualization, you are choosing kernel independence, security, and guaranteed resources.

Whether you are managing a complex LAMP stack or experimenting with newer configuration management tools like Chef, you need a foundation that acts like real hardware.

Ready to ditch the noisy neighbors? Deploy a pure KVM SSD instance on CoolVDS today and see what 0.0% Steal Time feels like.