
Stop Sharing Your Kernel: Why KVM is the Future of Production Hosting in 2009


It’s 3:00 AM. Your pager goes off. Your MySQL slave has crashed again. You SSH in, run free -m, and see plenty of RAM available. But the logs say "Cannot allocate memory."

If this sounds familiar, you are likely the victim of a budget VPS provider squeezing you onto an oversold OpenVZ node. In the 2009 hosting landscape, a quiet war is being fought between container-based virtualization (Virtuozzo/OpenVZ) and full virtualization solutions.

At CoolVDS, we made a hard choice: we don't do overselling. That is why we are betting the farm on KVM (Kernel-based Virtual Machine).

The "Burst RAM" Lie

Most VPS providers in the low-cost market use OpenVZ. It’s efficient for them because it allows them to stack hundreds of users on a single Linux kernel. But for you, the sysadmin, it’s a nightmare.

With OpenVZ, you don't really have your own memory. You have "guaranteed" RAM and "burst" RAM. When the host node gets busy, that burst memory vanishes instantly. Your processes get killed by the host's OOM (Out of Memory) killer, even if your specific container wasn't misbehaving.
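
On an OpenVZ container you can at least watch this happening: the kernel records every refused allocation in the failcnt column of /proc/user_beancounters. A minimal sketch (the path and column layout follow the standard OpenVZ format; the helper name is ours, and it takes the file as an argument so you can run it on a saved copy):

```shell
# List every OpenVZ beancounter whose failcnt is non-zero, i.e. every
# resource limit the host node has already refused you on.
list_failcnt() {
    awk 'NF >= 6 && $1 != "uid" && $NF ~ /^[0-9]+$/ && $NF > 0 {
             # failcnt is the last column; the resource name sits 5 fields left
             print $(NF - 5), $NF
         }' "$1"
}

# Inside a container (no root needed):
#   list_failcnt /proc/user_beancounters
```

Any non-zero line means the host, not your application, caused the failure: exactly the 3 AM "Cannot allocate memory" scenario above.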

Enter KVM: True Hardware Virtualization

KVM is different. Merged into the Linux kernel in version 2.6.20, it turns the kernel itself into a hypervisor. Unlike paravirtualized Xen, which requires a modified guest kernel, or OpenVZ, which shares the host's kernel outright, KVM relies on hardware virtualization extensions (Intel VT-x or AMD-V) to run unmodified guests.

This means your CoolVDS instance runs its own independent kernel. You can load custom modules. You can tune your TCP stack variables in /etc/sysctl.conf without permission denied errors. Most importantly: your RAM is your RAM.
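
For example, on a KVM guest you can persist TCP tuning in /etc/sysctl.conf and load it with sysctl -p. A sketch only: the values below are illustrative, not recommendations, and the helper takes the config path as an argument so you can try it on a scratch file first:

```shell
# Append illustrative TCP buffer tuning to a sysctl config file.
# On an OpenVZ container these writes would be denied; on KVM the
# guest kernel is yours.
append_tcp_tuning() {
    cat >> "$1" <<'EOF'
# Raise TCP read/write buffer ceilings for long-haul connections
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
# Release closed sockets sooner under heavy connection churn
net.ipv4.tcp_fin_timeout = 30
EOF
}

# On the real system (as root):
#   append_tcp_tuning /etc/sysctl.conf && sysctl -p
```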

Pro Tip: To check whether your CPU supports the hardware virtualization extensions KVM requires, run this on your physical box:

egrep -c '(vmx|svm)' /proc/cpuinfo

If it returns 0, you're stuck in the dark ages.

Performance: The VirtIO Advantage

Skeptics say full virtualization is slower than containers. That was true in 2006. In 2009, with VirtIO drivers, the overhead is negligible. VirtIO allows the guest OS to talk directly to the hypervisor for network and disk I/O, bypassing full emulation.
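
You can verify from inside a guest whether you actually got paravirtual disks: VirtIO block devices appear as /dev/vdX rather than emulated /dev/sdX or /dev/hdX. A small sketch (the sysfs directory is passed as an argument so the check can be exercised outside a VM):

```shell
# Report whether any VirtIO block devices (vda, vdb, ...) are visible.
# Pass the sysfs block directory; on a live guest that is /sys/block.
has_virtio_disk() {
    for dev in "$1"/vd*; do
        if [ -e "$dev" ]; then
            echo "virtio block device: $(basename "$dev")"
            return 0
        fi
    done
    echo "no virtio block devices (fully emulated IDE/SCSI?)"
    return 1
}

# On a CoolVDS guest:
#   has_virtio_disk /sys/block
```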

Here is a real-world scenario from a recent deployment we managed for a high-traffic Magento store targeting the Nordic market. We moved them from a generic US-based OpenVZ plan to a KVM slice.

Metric              Legacy OpenVZ            CoolVDS KVM (VirtIO)
Disk Write Speed    45 MB/s (fluctuating)    120 MB/s (consistent)
Swap Usage          Failcnt errors           Managed by guest OS
Kernel Tunables     Locked                   Full control

Data Integrity and The Norwegian Context

Latency matters. If your customers are in Oslo or Kyiv, hosting in Texas is hurting your user experience. A ping time of 140ms vs 15ms is noticeable when loading dynamic PHP applications.
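
If you want to quantify that difference yourself, average the round-trip times from a short ping run. A quick sketch (the hostnames are placeholders; the parser reads standard ping output on stdin):

```shell
# Print the mean round-trip time, in ms, from ping output fed on stdin.
avg_rtt() {
    grep -o 'time=[0-9.]*' |
        cut -d= -f2 |
        awk '{ sum += $1; n++ } END { if (n) printf "%.1f\n", sum / n }'
}

# Compare for yourself, e.g.:
#   ping -c 10 some-host-in-oslo.example  | avg_rtt
#   ping -c 10 some-host-in-texas.example | avg_rtt
```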

Furthermore, we must consider the legal landscape. The Norwegian Personal Data Act (Personopplysningsloven) places strict requirements on how personal data is handled. While the US Safe Harbor framework theoretically protects data transfers, many Norwegian CTOs prefer the legal certainty of keeping data on Norwegian soil, under the oversight of Datatilsynet.

CoolVDS infrastructure is located physically in Oslo. We use high-performance 15k RPM SAS RAID-10 arrays (and are currently testing the new Intel X25-E SSDs for database tiers). This ensures that when you write data, it stays here, and it writes fast.

Configuration: Optimizing the I/O Scheduler

Because KVM presents a virtual block device to your system, you should optimize the Linux I/O scheduler inside the guest. The default cfq scheduler reorders requests to minimize head movement on physical spinning disks; in a virtualized environment the host already schedules the real disk, so guest-side reordering just adds latency.

On your CoolVDS CentOS 5 or Ubuntu 9.04 instance, switch to the deadline or noop scheduler for better database performance:

# echo noop > /sys/block/vda/queue/scheduler

To make it permanent, add elevator=noop to your kernel parameters in /boot/grub/menu.lst.
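
To confirm which scheduler is active, read the same sysfs file back: the kernel marks the current choice in square brackets (e.g. "noop anticipatory deadline [cfq]"). A small helper, taking the file path as an argument so it can be tried on any copy:

```shell
# Print the active I/O scheduler from a scheduler sysfs file, where the
# kernel wraps the current selection in square brackets.
current_scheduler() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}

# On the guest:
#   current_scheduler /sys/block/vda/queue/scheduler
```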

The Verdict

If you are running a static HTML site, OpenVZ is fine. But for "battle-hardened" applications—databases, mail servers, or Java application servers—you need the isolation and predictability of KVM.

Don't let a "noisy neighbor" steal your CPU cycles. Get a guaranteed slice of the server.

Ready to compile your own kernel? Deploy a KVM instance on CoolVDS today and experience true root access.
