
KVM vs. OpenVZ: Why 'Guaranteed' RAM is a Lie & How to Tune KVM for Production in 2012

The "Guaranteed Resources" Myth: Why Serious Architects Choose KVM

It is 2012, and yet I still see "Enterprise" hosting providers pushing OpenVZ containers as if they were dedicated servers. If you have ever watched your load average spike while your traffic remained flat, you have been a victim of the "noisy neighbor" effect. In the shared kernel model of OpenVZ, memory is often a soft limit, and CPU time is a suggestion, not a promise.

For a hobby blog, that is fine. For a Magento storefront or a high-traffic media site targeting the Norwegian market, it is negligence. At CoolVDS, we have standardized on KVM (Kernel-based Virtual Machine) for a simple reason: isolation. When we allocate a core, it is yours. But simply moving to KVM isn't enough; you have to tune it.

In this post, I’m going to walk you through the exact optimization stack I used to rescue a client's database server last week after their previous "cloud" host choked during a simple newsletter blast.

The I/O Bottleneck: Why HDD is Dead for Databases

The biggest bottleneck in virtualization today is storage I/O. With the price of RAM stabilizing, disk latency is the new enemy. If you are still running MySQL on spinning SAS disks, you are fighting a losing battle against latency.

We are currently seeing the industry transition to SSD storage. While the new NVMe storage specification (version 1.0 released last year) promises to revolutionize how we talk to flash memory in the future, right now, high-performance SATA/SAS SSDs in RAID-10 are the absolute gold standard for production throughput.
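Not sure what class of storage your current provider actually gives you? A quick direct-I/O write test is a reasonable sanity check. This is only a rough sketch: the file name is a placeholder, it writes 1 GB, and you should run it on a volume with spare space during a quiet period.

$ dd if=/dev/zero of=ddtest bs=1M count=1024 oflag=direct
$ rm ddtest

The oflag=direct flag bypasses the page cache, so the MB/s figure dd prints at the end reflects the disk itself rather than your RAM. A RAID-10 SSD array will post numbers a spinning SAS volume simply cannot touch.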

Optimization #1: The Scheduler

By default, CentOS 6 and Ubuntu 12.04 use the cfq (Completely Fair Queuing) scheduler. This is designed to minimize head seek time on spinning platters. On a VPS backed by SSDs, this logic is redundant and actually adds latency.

You need to switch to deadline or noop. The noop scheduler is often best for KVM guests because the hypervisor handles the physical disk scheduling. Here is how you check your current setting:

$ cat /sys/block/vda/queue/scheduler
noop anticipatory deadline [cfq]

If you see [cfq] selected, change it immediately:

# echo noop > /sys/block/vda/queue/scheduler

To make this permanent, add elevator=noop to your kernel parameters: on legacy GRUB, append it to the kernel line in /boot/grub/menu.lst (grub.conf on CentOS 6); on GRUB 2, add it to /etc/default/grub and regenerate the boot configuration.
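On Ubuntu 12.04 with GRUB 2, for example, the end result looks roughly like this (the rest of your GRUB_CMDLINE_LINUX_DEFAULT line may differ):

$ grep CMDLINE /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
$ sudo update-grub

Reboot, re-run the cat above, and confirm [noop] is now the selected scheduler.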

Database Tuning for Virtualized Environments

Last month, during the heavy traffic following the Altinn downtime news, one of our clients saw their MySQL 5.5 instance lock up. The issue wasn't CPU; it was I/O wait. The default InnoDB settings assume a physical server with a slow disk.
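If you want to confirm for yourself that a box is waiting on disk rather than CPU, iostat from the sysstat package is the quickest way I know (install it via yum or apt if it is missing):

$ iostat -x 1

Watch the await and %util columns for your data volume, plus the %iowait figure in the CPU line. await creeping into the tens of milliseconds while %util sits near 100 means your queries are queuing behind the disk, no matter how idle the cores look.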

Here is the my.cnf configuration block I deploy on CoolVDS instances to leverage our high-speed storage:

[mysqld]
# Set this to 70-80% of your VPS RAM (4G here assumes roughly a 6GB instance)
innodb_buffer_pool_size = 4G

# Crucial for avoiding double buffering in the OS
innodb_flush_method = O_DIRECT

# Default is a measly 5MB, far too small for heavy write loads
innodb_log_file_size = 512M
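# Note: on MySQL 5.5, changing this requires a clean shutdown and moving the
# old ib_logfile0 / ib_logfile1 aside before mysqld will start with the new size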

# Stop InnoDB from being shy about using I/O capacity
innodb_io_capacity = 2000
innodb_read_io_threads = 8
innodb_write_io_threads = 8

Pro Tip: Always disable atime on your file system mounts. There is no reason your server should write to the disk just to say it read a file. Edit /etc/fstab and add noatime,nodiratime to your root partition options.
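A typical root entry ends up looking something like this (the UUID and file system are placeholders for whatever your own root device uses):

# /etc/fstab - root partition with access-time writes disabled
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime,nodiratime  0  1

You can apply it without a reboot with mount -o remount,noatime,nodiratime / and verify with mount | grep ' / '.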

Data Sovereignty: The Norwegian Context

Latency isn't just about disk speed; it's about network topology. If your customers are in Oslo, routing traffic through a data center in Frankfurt or Amsterdam adds measurable milliseconds. More importantly, we are seeing stricter enforcement from Datatilsynet (The Norwegian Data Inspectorate) regarding where personal data lives.
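This is easy to measure for yourself. From a connection in Norway, compare round trips to your candidate servers; the hostnames below are placeholders for your own endpoints.

$ ping -c 10 vps-in-oslo.example.com
$ ping -c 10 vps-in-frankfurt.example.com

Peered over NIX inside Norway you should see single-digit millisecond averages, while a detour to continental Europe typically adds a few tens of milliseconds per round trip, and that compounds across every SSL handshake and database query your pages make.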

Under the Personal Data Act (Personopplysningsloven), you are responsible for your users' data. Hosting outside the EEA or on non-compliant US-owned clouds (Safe Harbor notwithstanding) introduces legal gray areas. Hosting on CoolVDS servers physically located in Norway ensures you are covered by Norwegian law, with low latency access to the NIX (Norwegian Internet Exchange).

Verifying Your Virtualization

Not sure if your current host is lying to you about "Dedicated Resources"? Check for yourself. If you are on a true KVM hypervisor, you should see the hardware passed through correctly.

$ dmesg | grep -i kvm
[    0.000000] KVM: All CPU(s) started

If you run free -m and see memory usage that doesn't make sense, or if /proc/user_beancounters exists, you are in an OpenVZ container. Get out.
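A one-liner along these lines (just a convenience wrapper around the /proc check above) settles it quickly:

$ [ -e /proc/user_beancounters ] && echo "OpenVZ container" || echo "no beancounters - likely KVM, Xen or bare metal"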

Feature      | OpenVZ (Budget Hosts)    | KVM (CoolVDS)
-------------|--------------------------|--------------------------
Kernel       | Shared with Host         | Isolated / Custom
Performance  | Inconsistent             | Guaranteed
Swap         | Fake / Burstable         | Real Partition
Privacy      | Host can see processes   | Full encryption possible

The Verdict

In 2012, hardware is cheap enough that there is no excuse for overselling. High-performance hosting requires a combination of rigorous kernel tuning and the right underlying architecture.

If you are tired of debugging "ghost" latency issues and want a platform that respects the O_DIRECT flag, it’s time to upgrade.

Don't let slow I/O kill your SEO. Deploy a test KVM instance on CoolVDS in 55 seconds and see the difference a real kernel makes.