Why OpenVZ is Killing Your Database: The Case for KVM in 2010

If I have to debug one more MySQL crash caused by a hosting provider's "burstable RAM" limit, I might just move to a cabin in Finnmark and disconnect from the NIX entirely. It is 2010. We are building sophisticated web applications with Magento and Drupal, yet so many systems administrators are still being sold the lie of container-based virtualization for high-performance workloads.

Here is the brutal truth: if you are running a database-heavy application on OpenVZ or Virtuozzo, you are sharing your kernel with a hundred other users. When their PHP scripts go haywire, your latency spikes. In this industry, we call it the "Noisy Neighbor" effect, but for your uptime, it is a death sentence. At CoolVDS, we grew tired of seeing clients suffer from this, which is why we standardized on KVM (Kernel-based Virtual Machine) for all serious deployments.

The "Overselling" Problem: A War Story

Last month, I was consulting for a mid-sized e-commerce shop based here in Oslo. They were running a standard LAMP stack (Ubuntu 10.04 LTS, MySQL 5.1) on a "High Performance" VPS from a generic European provider. Every evening at roughly 20:00, their disk I/O wait (`iowait`) would skyrocket to 40%, and MySQL would lock up. The site didn't just slow down; it threw 503 errors.
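If you want to check for the same symptom on your own box, iowait can be sampled straight from `/proc/stat`. A minimal sketch; the field order (user, nice, system, idle, iowait) assumes a 2.6 kernel:

```shell
#!/bin/sh
# Sample the aggregate CPU counters from /proc/stat one second apart.
# First line fields: cpu user nice system idle iowait irq softirq ...
read -r _ u1 n1 s1 i1 w1 rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 rest < /proc/stat

# Percentage of elapsed CPU ticks spent waiting on I/O
total=$(( (u2-u1) + (n2-n1) + (s2-s1) + (i2-i1) + (w2-w1) ))
iowait_pct=$(( 100 * (w2-w1) / total ))
echo "iowait over the last second: ${iowait_pct}%"
```

A sustained reading in the double digits, while your own processes are idle, points at the host node rather than your stack.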

We checked everything. We tuned the `innodb_buffer_pool_size`. We optimized the queries. Nothing worked. Why? Because the issue wasn't inside the container. It was the host node. The provider had oversold the physical disk I/O, and another tenant on that physical server was running a backup script that saturated the RAID controller.

On a container-based system like OpenVZ, you have zero isolation from this. You are at the mercy of the host's scheduler. With KVM, the virtualization happens at the hardware level: the guest runs its own kernel and believes it is on real hardware, and your resources are a dedicated allocation rather than a slice of a shared pool.

The KVM Difference: Hardware Virtualization

KVM turns the Linux kernel into a hypervisor. It requires a processor with hardware virtualization extensions (Intel VT or AMD-V). Unlike containers, which share the host's kernel, a KVM guest has its own kernel. This means you can load your own modules, set your own strict `sysctl` parameters, and most importantly, your memory is allocated, not "burstable" (which is marketing speak for "you might get it if nobody else needs it").

To verify if your current hardware supports KVM (if you are building your own node), you look for the flags:

egrep -c '(vmx|svm)' /proc/cpuinfo

If you get a 0, you are stuck. If you get a 1 or more, you are ready for real virtualization. Once KVM is installed, you are managing distinct processes. Here is how we verify a running KVM instance is actually using the hardware extensions correctly on a host node:

lsmod | grep kvm
kvm_intel              47162  0
kvm                   292815  1 kvm_intel
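Both checks can be rolled into one small script. A sketch, assuming the standard `kvm`/`kvm_intel`/`kvm_amd` module naming:

```shell
#!/bin/sh
# Does the CPU advertise VT-x (vmx) or AMD-V (svm)?
caps=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$caps" -gt 0 ]; then
    echo "CPU OK: $caps logical core(s) with hardware virtualization flags"
else
    echo "No vmx/svm flags found - KVM will not run on this hardware"
fi

# Is a KVM module actually loaded on this host?
if lsmod | grep -q '^kvm'; then
    echo "kvm module loaded"
else
    echo "kvm module not loaded"
fi
```

Remember that the flags can also be disabled in the BIOS, so a capable CPU with a count of 0 is worth a trip into the firmware settings before you give up on the box.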

Tuning Linux for KVM Performance

Just switching to KVM isn't the magic bullet; you have to configure the guest OS to know it is virtualized. One of the biggest bottlenecks in 2010 is disk I/O. By default, Linux uses the `cfq` (Completely Fair Queuing) scheduler. This is great for physical spinning disks, but inside a VM, the host is already handling the scheduling. Using `cfq` twice (once in guest, once in host) adds unnecessary latency.

For our CoolVDS KVM instances running on our new SSD arrays, we force the guest OS to use the `noop` or `deadline` scheduler. This tells the guest kernel: "Don't try to be smart, just send the I/O request to the hypervisor immediately."

Here is how you apply this in Ubuntu 10.04 (Grub 2):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"

Then run `update-grub` and reboot. For testing, you can also apply the change instantly without a reboot:

echo noop > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq
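On guests with more than one virtual disk, you can loop over every block device instead of hard-coding `sda`. A sketch — `set_scheduler` is our own helper name, the sysfs root is parameterized so it can be dry-run, and writing to the real `/sys` requires root:

```shell
#!/bin/sh
# Set the I/O scheduler on every block device under a sysfs root.
# Pass "/sys" on a real guest (as root); any other path for a dry run.
set_scheduler() {
    sysroot=$1
    sched=$2
    for f in "$sysroot"/block/*/queue/scheduler; do
        [ -w "$f" ] && echo "$sched" > "$f"
    done
    return 0
}

# e.g. from /etc/rc.local:  set_scheduler /sys noop
```

Calling it from `/etc/rc.local` covers the case where you cannot touch the kernel command line.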

Optimizing MySQL 5.1 for KVM

When you have guaranteed RAM (thanks to KVM), you can aggressively tune InnoDB. On a 4GB RAM VPS, leaving the defaults is criminal. You want to allocate roughly 70-80% of RAM to the buffer pool to prevent disk swapping, provided you don't have other heavy processes like Apache preforks eating it all.

# /etc/mysql/my.cnf
[mysqld]

# The most important setting for InnoDB performance
innodb_buffer_pool_size = 2G

# Prevent double buffering since OS does file caching too
innodb_flush_method = O_DIRECT

# If you have RAID10 with battery-backed cache (like we do), you can trust this
innodb_flush_log_at_trx_commit = 2

Pro Tip: Be careful with `innodb_flush_log_at_trx_commit = 2`. It gives you a massive speed boost, but if the OS crashes, you might lose up to the last second of transactions. On a stable KVM platform like CoolVDS with redundant power, this risk is minimal compared to the performance gain.
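The 70-80% rule of thumb above is easy to compute on the box itself. A sketch reading total RAM from `/proc/meminfo`; the 70% figure is our assumption for a VM dedicated to the database:

```shell
#!/bin/sh
# Suggest an InnoDB buffer pool size as roughly 70% of physical RAM.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb * 70 / 100 / 1024 ))
echo "innodb_buffer_pool_size = ${pool_mb}M"
```

On a 4 GB guest this lands near the 2G-3G range used in the config above; scale the percentage down if Apache or other memory-hungry processes share the instance.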

Data Sovereignty: The Norwegian Context

We are seeing tighter regulations regarding where data lives. The Personopplysningsloven (Personal Data Act) and the EU Data Protection Directive require strict control over customer data. When you use cheap US-based VPS providers, you are operating in a grey area regarding Safe Harbor.

Hosting in Norway isn't just about latency—though pinging 127.0.0.1 (localhost) is the only thing faster than our connection to NIX—it's about compliance. Datatilsynet is becoming more active. Ensuring your physical server is located in Oslo means your data remains under Norwegian jurisdiction, safe from foreign subpoena powers that affect US-owned data centers.

Comparison: OpenVZ vs. KVM (CoolVDS)

Feature         | OpenVZ / Containers            | KVM (CoolVDS Standard)
----------------+--------------------------------+-------------------------------
Kernel          | Shared (often an old 2.6.18)   | Dedicated (run latest kernels)
RAM Allocation  | Burstable (oversold)           | Hard reserved
Disk I/O        | Contended / unprotected        | Isolated / fair
Swap            | Fake / non-existent            | Real partition
Stability       | Prone to the "OOM Killer"      | Enterprise grade

Why We Chose KVM for CoolVDS

We could have made more money using Virtuozzo. We could have crammed 50 clients onto a single server and hoped they didn't all get traffic at once. But we are engineers, not salesmen. We built the platform we wanted to use.

Our infrastructure utilizes RAID10 Enterprise SAS and the new generation of SSDs for caching tiers, ensuring that when your application asks for data, it gets it instantly. We combine this with upstream KVM to ensure that when you buy 4GB of RAM, you get 4GB of RAM. No asterisks. No "burst" marketing.

If your `top` command shows high load average but low CPU usage, you are suffering from I/O wait caused by your neighbors. Stop debugging a problem you cannot fix.

Is your site lagging? Verify your I/O wait today. If it's over 10%, it's time to migrate. Spin up a true KVM instance on CoolVDS and feel the difference raw, dedicated hardware makes.