KVM vs OpenVZ: Stop Gambling with Your Production Database Performance
Let’s be honest for a second. If you are still running a high-traffic Magento store or a critical MySQL backend on an OpenVZ container, you are not hosting; you are gambling. I have spent the last three weeks debugging a client's "unexplainable" downtime during peak traffic hours. The logs were clean. The memory usage was fine. But the site was crawling.
The culprit? Steal time.
On their budget VPS, a neighbor on the same physical node was compiling a kernel (or mining Bitcoin, who knows), and because OpenVZ shares the host kernel, my client's database was fighting for CPU cycles that simply weren't there. In the professional systems administration world, particularly here in Norway where reliability is currency, this is unacceptable.
The Architecture: Why KVM Wins in 2012
OpenVZ is essentially chroot on steroids. It’s lightweight, sure, but it relies on a shared kernel. If the host kernel panics, every single customer on that node goes down. Furthermore, resource isolation is soft. You are at the mercy of the UBC (User Beancounters) limits, which can behave unpredictably under load.
KVM (Kernel-based Virtual Machine), on the other hand, is built directly into the Linux kernel (since 2.6.20). It turns the Linux kernel into a hypervisor. Each VM is a standard Linux process, scheduled by the standard Linux scheduler, with its own dedicated memory space, disk interface, and most importantly, its own kernel.
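Quick sanity check: if you want to verify that a box can actually run KVM with hardware acceleration (or that your "KVM" VPS isn't something else in disguise), two commands are enough. A minimal sketch; the exact module name (kvm_intel vs kvm_amd) depends on the host CPU:
# Does the CPU expose hardware virtualization? (vmx = Intel VT-x, svm = AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo
# On the host, the KVM modules should be loaded
lsmod | grep kvm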
Identifying the Bottleneck
Before you migrate, check if your current environment is suffering from noisy neighbors. Run top and look at the %st (steal) column.
Cpu(s): 12.5%us, 4.2%sy, 0.0%ni, 65.0%id, 15.3%wa, 0.0%hi, 0.2%si, 2.8%st
If that last number (st) is consistently above 0%, your hypervisor is stealing cycles from you. In a KVM environment with dedicated cores—like the setup we enforce at CoolVDS—this should be near zero.
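If you want more than a single top snapshot, vmstat can sample steal time over a window. A minimal sketch; the last column, st, is the percentage of CPU time the hypervisor took away from your guest:
# Sample CPU statistics every 5 seconds, 12 times (one minute in total)
# Watch the final "st" column; anything consistently above zero is a red flag
vmstat 5 12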
Optimizing KVM for I/O Heavy Workloads
The biggest myth about virtualization in 2012 is that "disk I/O is slow." This is only true if you are using emulated IDE drivers or spinning rust (HDDs). With the rise of solid-state storage and VirtIO drivers, the gap has closed significantly.
When provisioning a KVM instance (or asking your provider), you must ensure you are using paravirtualized drivers. The difference is night and day.
1. The Disk Driver Check
On a CentOS 6 or Ubuntu 12.04 LTS guest, check your loaded modules:
lsmod | grep virtio
You should see virtio_blk and virtio_net. If you are seeing ata_piix, you are running in legacy mode and losing performance.
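For reference, the bus is chosen on the host side in the libvirt domain XML. If you run your own hypervisor (or want to know exactly what to ask your provider for), a VirtIO-backed disk looks roughly like this; the image path here is just a placeholder:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>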
2. Filesystem Tuning for SSDs
With CoolVDS deploying pure SSD arrays, you need to tell your Linux kernel that it doesn't need to write access times for every read operation. This reduces write amplification on the flash storage.
Edit your /etc/fstab:
# /etc/fstab
/dev/vda1 / ext4 errors=remount-ro,noatime,discard 0 1
The noatime flag stops the OS from writing a timestamp every time you read a file. The discard flag enables TRIM support, keeping the SSD performance high over time.
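No reboot is needed to pick up the new options; a remount does it, and you can verify the result immediately. A minimal sketch, assuming your root filesystem sits on /dev/vda1 as in the fstab above:
# Apply the new mount options to the running system
mount -o remount,noatime,discard /
# Confirm the active mount options
mount | grep vda1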
Database Tuning on KVM
Moving to KVM allows you to tune the TCP stack and database buffers without hitting the artificial limits enforced by OpenVZ's /proc/user_beancounters. Here is a production-ready snippet for a 4GB RAM KVM instance running MySQL 5.5:
[mysqld]
# InnoDB is mandatory. MyISAM is dead.
default-storage-engine = InnoDB
# Dedicate 70-80% of RAM to the buffer pool on a dedicated DB server
innodb_buffer_pool_size = 3G
# Crucial for SSD performance. Note: innodb_flush_neighbors only exists in MySQL 5.6+;
# on stock 5.5, omit this line (Percona Server offers a similar innodb_flush_neighbor_pages setting)
innodb_flush_neighbors = 0
innodb_io_capacity = 2000
# Data Integrity vs Speed (Set to 2 for higher speed, 1 for ACID compliance)
innodb_flush_log_at_trx_commit = 1
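Once MySQL is restarted with these settings, confirm the buffer pool actually took effect instead of just trusting the config file. A quick check from the shell (credentials omitted for brevity):
# Configured size, in bytes - should come back as roughly 3G
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"
# How much of the pool is populated versus still free
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';"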
Pro Tip: If you are migrating from a physical server to KVM, lower your swappiness. The default of 60 is too aggressive for a virtualized environment. Add vm.swappiness = 10 to /etc/sysctl.conf.
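Applying that change without waiting for a reboot takes two commands:
# Persist the setting across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf
# Load it into the running kernel immediately
sysctl -p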
The "CoolVDS" Factor: Hardware Matters
Software optimization can only get you so far. If the underlying spindle is a 7200 RPM SATA drive shared by 40 users, your iowait will skyrocket regardless of your config.
This is why we built CoolVDS on an all-SSD architecture. We don't mess around with hybrid caching or tiered storage. It is raw flash storage connected via high-speed RAID controllers. For clients in Norway, this hardware is located right here in Oslo, ensuring sub-millisecond latency to NIX (Norwegian Internet Exchange).
Legal Compliance in Norway (Datatilsynet)
Beyond raw IOPS, there is the legal aspect. Under the Norwegian Personal Data Act (Personopplysningsloven), you are responsible for where your customer data lives. Using cheap US-based VPS providers puts you in a grey area regarding the EU Data Protection Directive. By keeping your KVM instances on Norwegian soil with CoolVDS, you simplify compliance instantly.
Testing Disk Speed Yourself
Don't take my word for it. Run a dd test on your current host. If you aren't seeing at least 200 MB/s write speeds, your application is bottlenecked.
dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
If your result is under 50 MB/s, you are likely on an overloaded node or spinning disks. It is 2012—there is no excuse for that kind of latency anymore.
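Sequential writes are only half the story, so if you want a read-side number too, flush the page cache first (as root) so you measure the disk rather than RAM. A quick sketch reusing the test file from above:
# Flush the page cache so the read actually hits the disk
sync && echo 3 > /proc/sys/vm/drop_caches
# Sequential read of the 1GB test file
dd if=testfile of=/dev/null bs=1M
# Clean up when done
rm testfile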
The Final Verdict
OpenVZ was a great bridge technology, but for serious production workloads, KVM is the standard. It provides the isolation of a dedicated server with the flexibility of the cloud. When you combine that with local Norwegian peering and enterprise SSDs, the choice becomes obvious.
Don't let slow I/O kill your SEO rankings or frustrate your users. Deploy a true KVM instance on CoolVDS today and feel the difference dedicated resources make.