The "Burst RAM" Lie: Why We Bet on KVM
If I see one more hosting provider marketing "Burst RAM" as a feature, I might just throw a rack server out the window. It is October 2010, and the era of overselling OpenVZ containers needs to end.
We have all been there. You deploy a standard LAMP stack, traffic spikes, and suddenly your httpd processes lock up. You check top, but your load average looks fine. You check your memory, and you have "free" RAM. So what is breaking?
The answer is usually your neighbor. On legacy container technologies like Virtuozzo or OpenVZ, you are sharing a single Linux kernel with hundreds of other users. If one guy decides to compile a massive C++ project or run a fork bomb, your database latency goes through the roof. This is why at CoolVDS, we have standardized strictly on KVM (Kernel-based Virtual Machine).
The Kernel Bottleneck: OpenVZ vs. KVM
In a containerized environment (OpenVZ), you are at the mercy of the host kernel. You cannot load your own modules. Want to run a specific VPN config requiring tun/tap devices? You often have to beg support to enable it. Need to tweak sysctl.conf parameters for high-concurrency TCP connections? Good luck.
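To make that concrete, here is the kind of sysctl.conf tuning a busy web server wants (the values below are purely illustrative, tune them for your own traffic). Depending on how the host is configured, many of these are read-only or silently ignored inside a container; on a KVM guest they just work:

# /etc/sysctl.conf -- illustrative values only
net.core.somaxconn = 1024                     # deeper listen backlog for busy httpd/nginx
net.ipv4.tcp_max_syn_backlog = 2048           # queue more half-open connections
net.ipv4.ip_local_port_range = 1024 65535     # wider ephemeral port range for outbound connections

# Apply without rebooting:
sysctl -p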
KVM is different. It uses the virtualization extensions found in modern CPUs (Intel VT-x and AMD-V) to present the guest with what looks and behaves like real hardware. When you boot a CoolVDS instance, you are booting your own kernel. You are not in a chroot jail; you are running a full virtual machine.
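One caveat: KVM needs those CPU extensions present and enabled in the BIOS. If you are evaluating a physical box as a host (this check is for real hardware, not for a guest), the CPU flags tell you whether VT-x or AMD-V is available:

egrep '(vmx|svm)' /proc/cpuinfo
# vmx = Intel VT-x, svm = AMD-V
# No output means no hardware virtualization support.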
The Proof is in the Drivers
The game-changer for KVM in 2010 has been the maturity of the Virtio drivers. A few years ago, full virtualization was slow because the hypervisor had to emulate legacy hardware (an IDE disk controller, an ancient network card) for the guest. Now, with paravirtualized drivers, the guest OS knows it is virtualized and talks to the hypervisor directly, skipping the emulation layer.
Here is how you check if your current host is giving you raw performance or emulated garbage. Run this on your CentOS 5.5 or Debian Lenny box:
lsmod | grep virtio
# Output should look like this:
# virtio_pci     <- PCI glue that discovers virtio devices
# virtio_ring    <- shared ring buffers between guest and hypervisor
# virtio_net     <- paravirtualized network interface
# virtio_blk     <- paravirtualized block (disk) device
If you don't see virtio_blk, your disk I/O is being emulated, and you are losing precious milliseconds on every database write.
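Another quick tell: paravirtualized disks show up as /dev/vda, /dev/vdb and so on, rather than the emulated /dev/sda or /dev/hda. A one-liner to check (the partition layout shown is just an example):

ls /dev/vd* 2>/dev/null
# /dev/vda  /dev/vda1  /dev/vda2   <- paravirtualized virtio disk
# No output? Your disk is going through an emulated IDE/SCSI controller.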
Storage: The I/O Wait Killer
CPU cycles are cheap. Disk I/O is expensive. This is the golden rule of hosting. Most VPS providers cram 50 users onto a single SATA hard drive running at 7,200 RPM. It doesn't matter if you have 8 cores assigned to you; if the disk head is seeking for someone else's data, your CPU is sitting in iowait doing absolutely nothing.
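Do not take my word for it; watch it yourself. A quick sketch using vmstat (five samples, two seconds apart):

vmstat 2 5
# Watch the 'wa' column: CPU time spent waiting on disk I/O.
# Recent kernels also report 'st': time stolen by the hypervisor.
# Sustained double-digit 'wa' while 'us' and 'sy' stay low means the disk,
# not your code, is the bottleneck.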
Pro Tip: If you are lucky enough to be testing the new Solid State Drives (SSDs) or running on CoolVDS's new high-performance tiers, make sure you change your Linux I/O scheduler. The default 'cfq' scheduler spends its effort reordering requests to minimize seek latency on a spinning disk; on flash storage that work is pure overhead. Switch to 'noop' or 'deadline' to cut the CPU cost.
Edit your /boot/grub/menu.lst and append this to your kernel line:
elevator=noop
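You can also flip the scheduler at runtime to test it before committing the grub change. Replace sda with your actual device (vda on a virtio disk):

cat /sys/block/sda/queue/scheduler      # the active scheduler is shown in brackets
echo noop > /sys/block/sda/queue/scheduler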
Data Sovereignty in Norway (Personopplysningsloven)
Beyond raw performance, we have to talk about compliance. With Datatilsynet (the Data Protection Authority) becoming stricter about where Norwegian user data lives, relying on US-based hosts like Rackspace or EC2 is becoming a legal headache for enterprise clients. The Data Protection Directive (95/46/EC) imposes distinct responsibilities on data controllers, and the Personal Data Act (Personopplysningsloven) is how Norway enforces them.
Hosting locally in Oslo isn't just about latency—though pinging the NIX (Norwegian Internet Exchange) in 2ms is nice. It is about knowing your data resides on physical disks within Norwegian borders, subject to Norwegian law, not the US PATRIOT Act.
Why We Built CoolVDS on KVM
We didn't choose KVM because it was easy. We chose it because it's honest. When you buy 512MB of RAM on a CoolVDS KVM slice, that RAM is reserved for you at the hypervisor level. It's not "burstable" memory that disappears when the host gets busy.
If you are tired of debugging random slow-downs and want a server that behaves like a real server, it is time to upgrade.
Stop fighting the noisy neighbors. Spin up a KVM instance with pure SSD storage on CoolVDS today and see what wa: 0.0% looks like in top.