
KVM vs. OpenVZ: Why The "Shared Kernel" Model is Failing Production Workloads in 2011


The Myth of "Burstable RAM": Why We Chose KVM for CoolVDS

It is 2011, and the hosting market is flooded with cheap Virtual Private Servers (VPS). If you have been browsing WebHostingTalk or the local Norwegian tech forums lately, you have likely seen offers for "2GB RAM" servers at prices that seem too good to be true. Usually, they are.

Most of these budget providers are running OpenVZ. While OpenVZ is fantastic for density—allowing a host to cram hundreds of containers onto a single physical node—it is becoming a liability for serious production environments. As a Systems Architect, I have spent the last six months migrating our core infrastructure away from container-based virtualization to KVM (Kernel-based Virtual Machine).

Here is why your next VPS in Norway needs to run on KVM, not OpenVZ, if you care about stability and raw I/O performance.

The "Noisy Neighbor" Problem

In an OpenVZ environment, all containers share the host's Linux kernel. This means that if one user on the node runs a heavy, poorly optimized MySQL query or a fork bomb, the load average of the entire physical server spikes. Everyone suffers.

If you have ever run this inside your container:

cat /proc/user_beancounters

And noticed the failcnt (fail count) incrementing even though you thought you had RAM available, you have been a victim of overselling. The host promised you RAM that did not physically exist.
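
A quick way to spot this is to filter for non-zero fail counters. The one-liner below is a minimal sketch; it assumes the standard beancounters layout, with two header lines and failcnt as the last column:

# List every resource whose fail counter (last column) is non-zero
awk 'NR > 2 && $NF > 0' /proc/user_beancounters

If that prints anything for privvmpages or oomguarpages, your container has been hitting limits you probably did not know existed.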

With KVM, which we use exclusively at CoolVDS, you get a dedicated kernel. Your RAM is allocated to your instance. If a neighbor spikes their CPU, your web server keeps humming along because the hypervisor enforces strict isolation. In our benchmarks on the new CentOS 6 release, KVM instances maintained consistent latency even while we stress-tested adjacent VMs.

Performance Benchmark: Random I/O

We ran `fio` tests comparing a standard OpenVZ container against a KVM instance backed by SSD (solid-state drive) storage. The difference is night and day, especially for database-driven workloads like Magento or Drupal.

Metric        | OpenVZ (SATA HDD)  | CoolVDS KVM (SSD)
Architecture  | Shared Kernel      | Full Virtualization
Write Latency | 150ms+ (variable)  | < 2ms (consistent)
Swap Usage    | Common             | Rare (dedicated RAM)
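
If you want to run the same kind of test on your own instance, a command along these lines is a reasonable starting point (the parameters are illustrative, not our exact benchmark configuration):

# 4k random-write test with direct I/O, bypassing the page cache
fio --name=randwrite --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting

Pay attention to the completion latency (clat) figures in the output rather than raw throughput; that is where oversold nodes tend to fall apart under concurrent load.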

Tuning for KVM: The VirtIO Advantage

Moving to KVM does require a bit more sysadmin knowledge. You aren't just given an environment; you are booting a full OS. To get the best performance, you must ensure you are using VirtIO drivers for network and disk. This allows the guest OS to talk directly to the hypervisor without the overhead of emulating legacy hardware.

If you are deploying on CoolVDS today, check your configuration:

# Check for virtio drivers in Linux
lsmod | grep virtio

You should see virtio_net and virtio_blk loaded. If you are still using IDE emulation, you are leaving 30% of your disk speed on the table.
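
Another quick sanity check is the device naming: the virtio block driver exposes disks as /dev/vda, /dev/vdb and so on, while emulated IDE or SCSI shows up as hda/sda (the exact names depend on your image, so treat this as a rule of thumb):

# Virtio block devices follow the vd* naming scheme
ls -l /dev/vd* 2>/dev/null || echo "no virtio block devices found"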

Pro Tip for High Availability:
Since KVM behaves like a dedicated server, you need to handle your own monitoring. We recommend setting up Nagios or Munin immediately after provisioning. Unlike managed hosting solutions where we handle the kernel updates, unmanaged KVM gives you the power (and responsibility) to update your own kernel via yum update kernel.
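
As a rough starting point on a fresh CentOS 6 instance (this sketch assumes you have already enabled the EPEL repository, which carries both monitoring packages):

# Install the monitoring stack, then apply the kernel update you now own
yum install -y nagios munin munin-node
yum update -y kernel
reboot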

Data Sovereignty and Latency in Norway

For our Norwegian clients, physical location matters more than ever. Routing traffic through Frankfurt or London adds unnecessary milliseconds. By locating our KVM clusters in Oslo, connected directly to NIX (Norwegian Internet Exchange), we ensure low latency for local users.
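
You can verify the routing yourself from your office connection (replace the documentation IP below with your instance's actual address):

# Trace the path and per-hop latency to your VPS
mtr --report 203.0.113.10

If the route leaves Norway before it reaches your server, you are paying a latency tax you do not need to.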

Furthermore, with the strict enforcement of the Personopplysningsloven (Personal Data Act), knowing exactly where your data sits—on a dedicated virtual disk, not mixed in a shared file system—simplifies compliance with the Data Inspectorate (Datatilsynet).

Conclusion

OpenVZ was a necessary bridge when hardware was expensive. But in 2011, with the price of RAM dropping and the availability of fast SSD storage, there is no excuse to run production apps on shared kernels. Whether you are running a high-traffic forum or a corporate intranet, the isolation of KVM is the only professional choice.

Ready to experience true resource isolation? Deploy a CoolVDS KVM instance in Oslo today. Our infrastructure includes standard DDoS protection to keep your services online, no matter what happens on the public web.
