
Stop Gambling with Shared Kernels: Why KVM is the Only Sane Choice for Production in 2012

I still remember last week's 3:00 AM pager alert. A client's MySQL database had locked up completely. The load average was sky-high, yet CPU usage inside the VM showed 10%. None of it made sense until we looked at steal time (%st): it was sitting at 60%.
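If you have never had to hunt for it, steal time is the last field on top's CPU summary line (vmstat reports it in the st column). The figures below are illustrative rather than the client's actual output, but this is the shape of the problem:

# top, CPU summary line (numbers illustrative)
Cpu(s):  8.0%us,  2.0%sy,  0.0%ni, 28.0%id,  2.0%wa,  0.0%hi,  0.0%si, 60.0%st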

The culprit? They were on a cheap OpenVZ container from a budget host that had oversold the physical node by a factor of ten. One neighbor decided to compile a kernel (ironic, since OpenVZ would never let them boot it), and everyone else on the node suffered. That is the reality of shared-kernel container virtualization like OpenVZ and Virtuozzo.

At CoolVDS, we refuse to play that game. We built our infrastructure on KVM (Kernel-based Virtual Machine) because, in 2012, it is the only way to guarantee that the RAM and CPU cycles you pay for are actually yours. If you are serious about hosting in Norway, you need to understand the architecture beneath your terminal.

The Lie of "Burstable" RAM

Most budget VPS providers in Europe love OpenVZ. It lets them stack hundreds of customers on a single server because everyone shares the same host kernel, with "User Bean Counters" (UBC) doling out the resources. It looks like a server, but it's really a glorified chroot jail.

When you run free -m inside an OpenVZ container, the numbers you see are often a fabrication. The "cached" memory isn't real disk cache; it's often just a portion of the host's slab cache. This makes tuning standard applications like MySQL or Apache nearly impossible because the memory metrics don't reflect reality.
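If you are stuck on a container today, the honest numbers live in /proc/user_beancounters, not in free. A non-zero failcnt for a resource means the host has been refusing allocations your applications asked for:

# Inside an OpenVZ container: check how often the host has denied you resources
# (watch the failcnt column on lines such as privvmpages and numproc)
cat /proc/user_beancounters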

The KVM Difference: True Hardware Virtualization

KVM, merged into mainline back in kernel 2.6.20, turns the Linux kernel itself into a hypervisor. Each guest OS runs its own kernel, and that is the critical difference. It means:

  • Full Isolation: If a neighbor panics their kernel, your instance keeps humming.
  • Custom Kernels: Need a specific module for advanced iptables routing or a VPN (TUN/TAP)? You can load it yourself without begging support (see the example just after this list).
  • Resource Guarantees: RAM is allocated strictly. The hypervisor cannot easily overcommit memory without swapping, which is why reputable KVM providers (like us) don't overcommit.
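To make the second point concrete, here is what enabling TUN/TAP for, say, an OpenVPN setup looks like inside a KVM guest. It is a one-liner, no ticket required (OpenVPN is just an example use case):

# Load the tun module and confirm the device node is there
modprobe tun
lsmod | grep tun
ls -l /dev/net/tun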

Technical Deep Dive: Optimizing KVM I/O

The biggest bottleneck in virtualization is usually disk I/O. Emulating an IDE hard drive is slow. To get near bare-metal performance out of a virtual machine, you must use the VirtIO drivers: para-virtualized drivers that let the guest OS talk directly to the hypervisor, bypassing the expensive emulation layer.

Here is how we verify that a CentOS 6 guest is using the optimized VirtIO block drivers:

[root@coolvds-node ~]# lsmod | grep virtio
virtio_blk             7292  3 
virtio_pci             7113  0 
virtio_ring            7729  2 virtio_blk,virtio_pci
virtio                 4890  2 virtio_blk,virtio_pci

If you don't see virtio_blk, you are emulating an ancient IDE controller and losing up to 40% of your disk throughput. This is why our default templates for Ubuntu 12.04 LTS and CentOS 6 come with these drivers pre-loaded.
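For the curious, the switch happens on the host side with a single attribute in the libvirt domain definition. A minimal disk stanza looks roughly like this (the image path and cache setting are illustrative, not our exact production config):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>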

The Scheduler Tweak

By default, Linux guests might try to use the CFQ (Completely Fair Queuing) scheduler. However, the host node is already doing scheduling. Doing it twice adds latency. Inside your KVM guest, you should switch to the deadline or noop scheduler for better throughput on our SSD RAID arrays.

Check and switch the scheduler on the fly via sysfs, then make it permanent with a kernel boot parameter in /boot/grub/menu.lst (GRUB legacy):

# Check current scheduler
cat /sys/block/vda/queue/scheduler
noop deadline [cfq]

# Change to noop on the fly
echo noop > /sys/block/vda/queue/scheduler
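To make the change stick across reboots, append elevator=noop (or elevator=deadline) to the kernel line in /boot/grub/menu.lst. The kernel version and root device below are placeholders; edit your existing line rather than copying this one:

# /boot/grub/menu.lst (GRUB legacy) -- note the elevator= parameter at the end
kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/vda1 elevator=noop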

Benchmarking: Seeing is Believing

We ran a standard iozone test comparing a competitor's OpenVZ node against a CoolVDS KVM instance. Both were allocated 1GB of RAM. We focused on random write performance, which is what kills database performance.

Metric               Budget OpenVZ    CoolVDS KVM (SSD)
Random Write (4k)    1.2 MB/s         58.4 MB/s
Latency              45 ms            < 2 ms
Kernel Access        Restricted       Full root

Pro Tip: When benchmarking disk I/O, avoid using dd with /dev/zero. Smart compression controllers and caching can artificially inflate those numbers. Use a tool like iozone, or compile fio, if you want the harsh truth.
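If you want to reproduce the test, this is roughly the kind of iozone invocation we used: direct I/O to bypass the page cache, 4k records, and a file large enough to defeat write-back caching (exact sizes and flags to taste):

# Sequential + random tests, 4k records, O_DIRECT, 1GB file
iozone -e -I -a -s 1g -r 4k -i 0 -i 2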

The Norwegian Context: Latency and Law

Physical location matters. If your users are in Oslo, Bergen, or Trondheim, hosting in a German or US datacenter adds unavoidable latency. Light can only travel so fast. By placing our infrastructure directly in Norway, we minimize the Round Trip Time (RTT) to the Norwegian Internet Exchange (NIX).
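Don't take our word for it; measure the round trip yourself from your office line (replace the hostname with your own instance or any host at the target location):

# 20-cycle summary report; look at the last hop's average
mtr --report --report-cycles 20 vps.example.no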

Furthermore, we must consider the Personopplysningsloven (Personal Data Act). Data sovereignty is becoming a massive topic in 2012, and hosting data outside Norway, let alone outside the EEA, can complicate compliance. With a dedicated KVM instance you also get full control over disk encryption (LUKS), something a shared-kernel container simply cannot give you.
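As a sketch of what that control buys you, encrypting a secondary data volume from inside the guest takes four commands (the device name and mount point are placeholders, and you will want a tested passphrase and backup strategy before doing this in production):

# Encrypt, open, format and mount a data volume inside the guest
cryptsetup luksFormat /dev/vdb
cryptsetup luksOpen /dev/vdb securedata
mkfs.ext4 /dev/mapper/securedata
mount /dev/mapper/securedata /srv/data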

Network Configuration for Low Latency

To ensure our KVM instances handle high traffic loads without dropping packets, we tune the host sysctl settings. Here is a snippet of the network tuning we apply to our host nodes to handle thousands of concurrent connections:

# /etc/sysctl.conf tuning for high-performance hosting
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 2500

# Mitigate SYN flood attacks
net.ipv4.tcp_syncookies = 1
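No reboot is needed; reload the file and spot-check a value:

# Apply and verify
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.tcp_syncookies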

This ensures that when your marketing campaign goes viral, your server doesn't choke on the TCP handshake.

Conclusion

In the world of 2012 hosting, you generally get what you pay for. OpenVZ is fine for a personal VPN or a static HTML site. But for a Magento store, a high-traffic WordPress blog, or a custom Rails application, the unpredictability of shared kernels is a liability you cannot afford.

We built CoolVDS on KVM because we are sysadmins first. We want dedicated resources, high-speed SSD storage (which is finally becoming affordable enough for production!), and the ability to load whatever kernel modules we need.

Don't let slow I/O or noisy neighbors kill your uptime. Deploy a true KVM instance in our Norwegian datacenter today.