KVM Virtualization: The End of "Noisy Neighbors" in Production

Let’s be honest for a moment. If I see one more hosting provider in Oslo selling "Guaranteed RAM" on an OpenVZ node while overselling their physical memory by 400%, I’m going to lose it. In the world of high-availability hosting, "burstable" resources are just marketing speak for "your server will crash when you need it most."

I’ve spent the last six months migrating our core infrastructure from legacy container-based solutions to Kernel-based Virtual Machine (KVM) hypervisors. The difference isn't just in the benchmarks; it’s in the sleep I’m finally getting at night. If you are running a Magento store or a high-traffic Drupal site in 2011, relying on a shared kernel is a ticking time bomb.

The "Privvmpages" Nightmare: A War Story

Two years ago, I was managing a deployment for a mid-sized Norwegian e-commerce retailer. They were hosting on a standard VPS provider that used OpenVZ. Everything looked fine on the dashboard. Load average was 0.4. RAM usage was reported as 512MB out of 2GB.

Yet, every day at 14:00 CET, MySQL would silently die.

The logs showed nothing. No OOM (Out of Memory) killer in the syslog. Why? Because in a containerized environment like OpenVZ, the kernel isn't yours. It's shared. The "beancounters" file tells the real truth.

cat /proc/user_beancounters

When we ran this, we saw the `failcnt` (failure count) for `privvmpages` skyrocketing. The host node had run out of memory, and because our container was "bursting," the shared host kernel ruthlessly denied our allocations without ever logging it inside our instance. That is the reality of shared-kernel architectures.
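
If you are still stuck on a container and want to check this yourself, the last column of that file is the failcnt. A minimal one-liner against the standard user_beancounters layout prints only the counters that have ever been denied:

# Print only the resources whose failcnt (last column) is non-zero.
# Lines 1-2 are the version header and column names, so skip them.
awk 'NR > 2 && $NF > 0' /proc/user_beancounters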

Why KVM is the Architecture of Choice for 2011

This is why we built the new CoolVDS infrastructure on KVM. Unlike containers, KVM turns the Linux kernel into a hypervisor. Each Guest OS gets its own kernel, its own memory space, and most importantly, true isolation.
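
A quick sanity check, in case your provider is vague about what "virtualization" actually means. On a KVM host the hardware extensions and kernel modules are visible, and inside a KVM guest you boot your own kernel. The commands below are a rough sketch; module names depend on the CPU vendor.

# On the host: does the CPU expose hardware virtualization (Intel VT-x / AMD-V)?
egrep -c '(vmx|svm)' /proc/cpuinfo        # anything > 0 means yes

# On the host: are the KVM modules loaded?
lsmod | grep kvm                          # expect kvm plus kvm_intel or kvm_amd

# Inside a guest: a KVM VM has its own boot log; a container does not.
dmesg | grep -i kvm                       # KVM guests usually report kvm-clock here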

1. The I/O Scheduler Advantage

On a shared kernel, you are stuck with whatever disk scheduler the host uses (usually CFQ). With KVM, you can tune your guest OS to use `deadline` or `noop`, which is critical when running on the high-speed SSD RAID arrays we use.

Here is how we configure our CentOS 6 KVM guests for maximum database throughput:

# /etc/grub.conf optimization
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/vg-root elevator=noop

By setting the elevator to `noop`, we let the hypervisor (and the hardware RAID controller) handle the scheduling logic, reducing CPU overhead inside the VM.
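
You can also check and flip the scheduler at runtime, without touching GRUB or rebooting. A quick sketch: the device is `vda` on virtio guests; adjust to `sda` or whatever your guest actually exposes.

# Show the active scheduler for the virtual disk (the current one is in brackets)
cat /sys/block/vda/queue/scheduler

# Switch to noop on the fly; keep the elevator= flag in grub.conf to make it permanent
echo noop > /sys/block/vda/queue/scheduler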

Optimizing MySQL 5.5 on KVM

With the release of MySQL 5.5, InnoDB is finally the default engine. This is a game-changer for data integrity, but it requires specific tuning that is often impossible on constrained shared environments. On a CoolVDS KVM instance, we have full control over the `my.cnf` to leverage the dedicated RAM.

Here is the configuration we use for a 4GB RAM node to ensure stable performance without swapping:

[mysqld]
# Basic Settings
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql

# InnoDB Optimization for 4GB System
# roughly 50% of RAM goes to the buffer pool, leaving headroom for the OS and PHP
innodb_buffer_pool_size = 2G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
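# 1 = flush the log to disk at every commit (full durability, what you want for orders)
# 2 = flush roughly once per second instead; faster on write-heavy loads, but a
#     crash can lose up to a second of transactions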
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

# Thread Concurrency
# Set this to 2 * (Num CPUs) + 2
innodb_thread_concurrency = 6
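
One caveat before you apply this to an existing 5.5 datadir: InnoDB refuses to start if `innodb_log_file_size` no longer matches the `ib_logfile*` files on disk. The rough procedure below assumes the stock CentOS paths (`/var/lib/mysql`, the `mysqld` init script); adjust to your layout.

# Force a full, clean shutdown so the old redo logs can be discarded safely
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
service mysqld stop

# Move the old redo logs aside (keep them until you confirm a clean start)
mkdir -p /root/innodb-log-backup
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/innodb-log-backup/

# On restart, InnoDB recreates the logs at the size declared in my.cnf
service mysqld start
tail /var/log/mysqld.log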

Latency and Sovereignty: The Norwegian Context

Latency matters. If your customers are in Oslo, Bergen, or Trondheim, routing traffic through Frankfurt or London adds unnecessary milliseconds. But beyond speed, there is the legal aspect.

With the current climate regarding the US Patriot Act, many Norwegian businesses are rightfully concerned about data sovereignty. Hosting your data on servers physically located in Norway, governed by Personopplysningsloven (the Personal Data Act), provides a layer of legal compliance that US-based clouds cannot guarantee. CoolVDS keeps your data on Norwegian soil and peers directly at NIX (the Norwegian Internet Exchange), so domestic traffic never takes a detour abroad.

Pro Tip: Use `mtr` (My Traceroute) instead of `ping` to diagnose packet loss. A single ping tells you nothing. `mtr --report -c 10 10.0.0.1` gives you a statistical average of route stability over time, hop by hop.

Nginx + PHP-FPM: The High-Performance Stack

To get the most out of KVM, you should ditch Apache's `mod_php` for Nginx and PHP-FPM. Apache consumes too much RAM per connection. In 2011, Nginx is stable enough for production and handles concurrency far better.

Below is a production-ready Nginx virtual host configuration for a high-traffic site. Notice the buffer adjustments to prevent temporary file writes to disk:

server {
    listen 80;
    server_name example.no;
    root /var/www/html/example;
    index index.php index.html;

    # Buffer size optimizations
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;
    output_buffers 1 32k;
    postpone_output 1460;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        
        # Tune timeouts for long scripts
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
    }

    # Static file caching
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires 30d;
    }
}
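
The `fastcgi_pass 127.0.0.1:9000` above assumes PHP-FPM is listening on that TCP port. Below is a minimal pool sketch for a 4GB node that also runs MySQL; the file path and the `nginx` user are the usual CentOS 6 defaults, but verify them against your install, and size `pm.max_children` from your real per-process memory footprint.

; /etc/php-fpm.d/www.conf (illustrative values)
[www]
listen = 127.0.0.1:9000
user = nginx
group = nginx

; Dynamic process management; cap children so PHP cannot starve MySQL
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500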

The Verdict: Performance is Predictability

We don't just sell hosting; we sell predictability. When you deploy on CoolVDS, you aren't fighting for CPU cycles with a neighbor running a runaway script. You get dedicated cores, SSD-backed storage for low-latency I/O, and the autonomy of running your own kernel inside a KVM guest.

Don't let legacy virtualization hold back your infrastructure. It's time to professionalize your stack.

Ready to test real isolation? Deploy a KVM instance in our Oslo data center today.