Why KVM is Killing Xen and OpenVZ: A Sysadmin’s Guide to Real Virtualization

Let’s be honest. If you are running a high-traffic forum or a Magento e-commerce shop on a budget VPS, you have probably woken up at 3:00 AM to a crashed server. You check the logs. Nothing. You check memory usage. It looks fine. Then you realize the truth: your provider’s "guaranteed" RAM was a lie.

Welcome to the dirty world of oversold OpenVZ containers. For years, the hosting market has been flooded with cheap "slices" that share a single kernel. One neighbor gets DDoS’d or runs a fork bomb, and your database latency spikes through the roof. It’s 2009, and we are done with that.

At CoolVDS, we have shifted our entire infrastructure to KVM (Kernel-based Virtual Machine). Why? Because the 2.6.20 release changed the game by turning the Linux kernel itself into a hypervisor. No more of the pure software emulation overhead you get from QEMU on its own, and no more of the fake resource promises you get from containers.

The Architecture: Why KVM Wins on Hardware Support

Unlike Xen, which requires a modified dom0 kernel and can be a nightmare to patch, KVM is upstream. It’s in the mainline kernel. It uses hardware virtualization extensions (Intel VT-x or AMD-V) to give you a slice of hardware that actually acts like hardware.

Before you even think about deploying a virtualization host, you need to verify your CPU flags. If the command below returns nothing, hardware virtualization is either missing or disabled in the BIOS, and you are stuck in software emulation, which is too slow for production:

# Check for Hardware Virtualization support
egrep '(vmx|svm)' /proc/cpuinfo

# Output should look something like this:
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm

If you see vmx (Intel) or svm (AMD), you are ready for KVM.
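
Assuming the flags are there, the next step is loading the KVM modules and making sure the /dev/kvm device node shows up (use kvm_amd instead of kvm_intel on AMD hardware):

# Load the hypervisor modules
modprobe kvm
modprobe kvm_intel

# Verify they are loaded and the device node exists
lsmod | grep kvm
ls -l /dev/kvm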

War Story: The MySQL "Steal Time" Trap

Last month, we migrated a client from a competitor’s "Cloud" (a glorified OpenVZ cluster). They were running MySQL 5.1 on CentOS 5. Their complaints? "Random slowdowns."

We ran top inside their old container. The CPU usage looked low, but the site was crawling. Then we looked at the %st (steal time) column.

Cpu(s): 12.4%us,  3.1%sy,  0.0%ni, 42.2%id,  2.1%wa,  0.0%hi,  0.3%si, 39.9%st

In their case, %st was hitting nearly 40%. That means the hypervisor was stealing CPU cycles from their VM to serve a noisy neighbor, and in a containerized environment there is nothing you can tune to get them back.

We moved them to a CoolVDS KVM instance. Because KVM treats the VM as a standard Linux process scheduled by the standard kernel scheduler (CFS), we could pin vCPUs and guarantee execution time. The result? Steal time dropped to 0.0%, and query execution time stabilized immediately.
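
For the curious, the pinning itself is a one-liner per vCPU with virsh (the domain name and core numbers here are just examples):

# Pin vCPU 0 of guest "db01" to physical core 2, vCPU 1 to core 3
virsh vcpupin db01 0 2
virsh vcpupin db01 1 3

# Confirm the placement
virsh vcpuinfo db01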

Tuning KVM for I/O Performance

The biggest bottleneck in virtualization right now is disk I/O. Emulating an IDE drive is slow. If you are setting up KVM manually (or configuring your CoolVDS instance), you must use VirtIO drivers. These are para-virtualized drivers that let the guest OS know it's running in a VM, bypassing unnecessary emulation layers.

1. The Disk Definition

Don't use the default IDE bus. In your libvirt domain XML, set the disk bus to virtio; on a hand-rolled qemu-kvm command line, the equivalent is if=virtio. A typical disk definition looks something like this:

<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

And if you are launching guests by hand instead of through libvirt, the same thing on the command line looks roughly like this:

qemu-kvm -m 1024 -smp 2 -drive file=/var/lib/libvirt/images/guest.img,if=virtio

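Inside the guest, you can confirm the paravirtualized path is actually in use (this assumes a virtio-aware guest kernel; older distros may need updated kernel packages):

# VirtIO disks show up as /dev/vdX rather than /dev/sdX or /dev/hdX
ls /dev/vd*

# The virtio modules should be loaded
lsmod | grep virtio
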
2. The Scheduler

Inside your Linux guest (the VM), the default I/O scheduler is usually CFQ (Completely Fair Queuing). This is great for physical spinning rust, but inside a VM, the host is already scheduling disk access. Double scheduling kills latency.

Switch your guest to the noop or deadline scheduler for lower latency.

# Check current scheduler
cat /sys/block/vda/queue/scheduler
[cfq] deadline noop

# Switch to noop on the fly
echo noop > /sys/block/vda/queue/scheduler

# Make it permanent in /boot/grub/menu.lst
kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop

Pro Tip: If you are lucky enough to be testing the new Intel X25-M SSDs (Solid State Drives), the noop scheduler is mandatory. The random access time on these new flash drives is so fast that the sorting logic of CFQ actually slows them down.

Network Latency: The NIX Factor

For our Norwegian clients, physics still applies. You can have the fastest KVM host in the world, but if your datacenter is in Texas, your ping to Oslo is going to be 120ms+. TCP window scaling can only do so much.

CoolVDS servers are located physically in Oslo, peering directly at NIX (Norwegian Internet Exchange). We see latencies as low as 2-5ms to local ISPs like Telenor and NextGenTel. When you are serving static assets or handling synchronous AJAX calls, that 100ms difference is noticeable to the end user.
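
Don't take anyone's word for it, ours included. Measure the round trip yourself from the connection your users actually sit on (192.0.2.1 is a placeholder; use the test IP your provider gives you):

# Ten-packet round-trip check
ping -c 10 192.0.2.1

# mtr shows per-hop latency, so you can see exactly where the milliseconds pile up
mtr --report --report-cycles 10 192.0.2.1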

Furthermore, hosting locally simplifies compliance with the Personal Data Act (Personopplysningsloven). Keeping customer data within Norwegian borders satisfies the Datatilsynet requirements without complex legal workarounds needed for US-based hosting.

The Storage Revolution: RAID-10 SAS vs. SSD

Most providers are still running SATA II drives in RAID-5. That’s a disaster for write performance (the parity calculation penalty). At CoolVDS, we use 15k RPM SAS drives in RAID-10 for our standard nodes.

However, for database-heavy workloads, we are rolling out limited High-Performance SSD storage nodes. We are seeing IOPS (Input/Output Operations Per Second) jump from ~150 on a standard drive to over 3,000 on these new flash units. It’s expensive, but if your business runs on MySQL, it’s worth every Krone.
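
If you want to sanity-check the storage you are actually paying for, a crude 4k random-read test with fio does the job, assuming you can get fio installed (it is not in every distro's base repository):

# 4k random reads with direct I/O, bypassing the page cache
fio --name=randread --rw=randread --bs=4k --size=512m --direct=1 --ioengine=libaio --runtime=60 --time_based

The IOPS figure it reports makes the difference between an oversold SATA array and SSD-backed storage obvious very quickly.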

Conclusion: Stop Sharing Your Kernel

Virtualization is maturing fast. Tools like virt-manager and virsh are making KVM manageable for the average sysadmin. But building a stable cluster requires careful attention to I/O schedulers, CPU flags, and upstream network providers.
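
If you have not used virsh yet, the day-to-day commands are mercifully short (the guest name here is just an example):

# List all defined guests, running or stopped
virsh list --all

# Start a guest, shut it down cleanly, or attach to its console
virsh start web01
virsh shutdown web01
virsh console web01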

If you want to spend your time coding PHP or configuring Apache, rather than debugging kernel panics caused by a noisy neighbor, you need a dedicated hypervisor.

Don't let legacy virtualization kill your uptime. Deploy a KVM instance with VirtIO drivers on CoolVDS today and feel the difference of raw hardware access.