Stop Playing Russian Roulette with "Burst RAM": Why KVM is the Future of Norwegian VPS
If you have ever stared at a terminal window at 3:00 AM wondering why your perfectly optimized Apache server just segfaulted, only to find a climbing failcnt column in /proc/user_beancounters, you know the pain. You are likely running on OpenVZ or Virtuozzo, and your hosting provider is lying to you about your resources.
It is 2009. The era of oversold shared hosting masquerading as "Virtual Private Servers" needs to end. For serious System Administrators running heavy LAMP stacks or Java Tomcat applications, "burst resources" are not a feature—they are a liability.
At CoolVDS, we are betting the farm on KVM (Kernel-based Virtual Machine). Here is why you should too.
The "Noisy Neighbor" Problem
Most budget VPS providers in Europe today use container-based virtualization like OpenVZ. It’s efficient for them because they can cram 500 customers onto a single physical server. They promise you 512MB of RAM, but that allocation is a soft limit carved out of a pool shared with every other container on the node. When neighbor #432 hits the Digg front page (or gets Slashdotted), your database performance tanks because the host kernel is thrashing.
In a production environment, consistency beats theoretical peak speed. You cannot explain to a client that their Magento store is offline because another user on the node is compiling a kernel.
Why KVM Changes the Game
Included in the mainline Linux kernel since version 2.6.20, KVM turns the Linux kernel itself into a hypervisor. Unlike containers, KVM provides full hardware-assisted virtualization, built on the Intel VT-x and AMD-V CPU extensions. If you are allocated 1GB of RAM, that memory is dedicated to your instance. The host OS cannot quietly reclaim it to service a neighbor.
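Before you build (or buy) a KVM box, it is worth confirming the hardware can actually do this. A minimal check, using the standard virtualization flags Linux exposes in /proc/cpuinfo:

```shell
# KVM needs the CPU's hardware virtualization extensions:
# vmx = Intel VT-x, svm = AMD-V. No flag, no KVM.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    virt_status="hardware virtualization flags present"
else
    virt_status="no vmx/svm flag - KVM cannot run here"
fi
echo "$virt_status"
```

Remember that some BIOSes ship with VT-x disabled by default, so a missing flag can sometimes be fixed with a trip into the firmware setup.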
War Story: The Tomcat Crash
Last month, we migrated a client from a generic German host to our Oslo facility. They were running a Java-based booking system. On their old OpenVZ slice, the JVM would crash randomly. The logs showed nothing but generic memory errors.
The culprit? The privvmpages limit. OpenVZ counts virtual memory allocated, not resident memory actually in use, and a JVM reserves its entire maximum heap as virtual address space at startup. The container was denying allocations even though the guest OS thought it had plenty of free RAM.
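You can spot this failure mode from inside the container. A sketch, using a fabricated excerpt of /proc/user_beancounters (on a real OpenVZ guest you would read the file directly); a nonzero failcnt in the last column means the host refused you an allocation:

```shell
# Fabricated sample mimicking /proc/user_beancounters layout:
# uid, resource, held, maxheld, barrier, limit, failcnt
cat <<'EOF' > /tmp/beancounters.sample
   101:  privvmpages    60000   65536    65536    69632      137
         kmemsize     2752512 2904576 11055923 11377049        0
EOF

# Print every resource whose failcnt (last column) is nonzero.
# $(NF-5) picks the resource name whether or not the uid column is present.
awk '$NF + 0 > 0 { print $(NF-5) " failcnt=" $NF }' /tmp/beancounters.sample
```

If that command ever prints privvmpages on your production slice, your mystery crashes are not a mystery anymore.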
We moved them to a KVM slice on CoolVDS. The result? Uptime: 24 days and counting. No magic, just true isolation.
Optimizing KVM for Performance in 2009
KVM is not perfect out of the box. To get bare-metal speeds, you need to bypass emulation overhead using VirtIO drivers. This allows the guest OS to talk directly to the hypervisor for network and disk I/O.
If you are setting up a KVM instance (or using ours), ensure your disk bus is set to VirtIO, not IDE. In your libvirt XML or qemu command, it makes a massive difference:
```shell
# Bad (emulated IDE)
-drive file=/var/lib/kvm/disk.img,if=ide

# Good (para-virtualized VirtIO)
-drive file=/var/lib/kvm/disk.img,if=virtio
```
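If you manage guests through libvirt rather than a raw qemu command line, the equivalent disk definition looks roughly like this (the file path and device name here are illustrative, matching the example above):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/kvm/disk.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Note the target device comes up as /dev/vda inside the guest, not /dev/sda, so check your fstab before switching an existing image over.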
Pro Tip for Database Servers: If you are running MySQL 5.0 or 5.1 on KVM, change your I/O scheduler from `cfq` to `deadline` inside the guest VM. This prevents the guest from re-ordering requests that the host RAID controller is already optimizing.
```shell
echo deadline > /sys/block/vda/queue/scheduler
```
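That echo takes effect immediately but does not survive a reboot. To make it permanent inside the guest, append elevator=deadline to the kernel line in the guest's GRUB config (GRUB legacy path shown, as on CentOS 5; the kernel version and root device here are illustrative):

```shell
# /boot/grub/grub.conf -- boot entry inside the KVM guest
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline
```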
The Norwegian Advantage: Latency and Law
Performance isn't just about CPU cycles; it's about the speed of light. If your target audience is in Oslo, Bergen, or Trondheim, hosting in a datacenter in Texas or even Frankfurt introduces unavoidable latency.
By peering directly at NIX (Norwegian Internet Exchange), CoolVDS ensures that your packets stay within the country. We are seeing ping times as low as 2-4ms from downtown Oslo to our racks. For high-frequency trading or interactive VoIP applications, that difference is tangible.
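The physics here is easy to sanity-check. Light in optical fibre covers roughly 200,000 km/s, and a ping is a round trip, so geography alone sets a hard floor on latency (distances below are approximate great-circle figures, not actual cable routes, which are always longer):

```shell
# Back-of-the-envelope minimum RTT from distance alone.
awk 'BEGIN {
    speed = 200000                                            # km/s in fibre
    printf "Oslo-Frankfurt floor: %.1f ms\n", 2 * 1100 / speed * 1000
    printf "Oslo-Dallas floor:    %.1f ms\n", 2 * 8000 / speed * 1000
}'
```

Real-world routing, queuing, and serialization push actual numbers well above these floors, which is exactly why a Frankfurt box can never match a local 2-4ms ping for a Norwegian audience.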
Data Integrity and Compliance
With the Data Protection Directive (95/46/EC) and Norway's strict Personopplysningsloven (Personal Data Act), knowing exactly where your data physically resides is becoming a boardroom issue. Unlike US-based "clouds" relying on Safe Harbor frameworks, our servers are physically located in Norway, subject to Norwegian law and the oversight of Datatilsynet.
The Hardware Reality: RAID-10 SAS
Virtualization software is only as fast as the rust it spins on. While consumer SSDs are starting to appear (like the Intel X25-M), they aren't ready for heavy write-intensive server workloads yet due to write amplification and cost.
That is why we stick to the gold standard: 15k RPM SAS drives in Hardware RAID-10. This gives you the redundancy of mirroring with the striping speed required for heavy I/O. Do not settle for a provider offering single SATA drives for production.
Conclusion
You can keep fighting with user_beancounters and praying your noisy neighbors don't launch a fork bomb, or you can graduate to true virtualization.
For development, a laptop is fine. For production, you need isolation, low latency network paths, and rock-solid storage I/O. At CoolVDS, we don't oversell, and we don't hide behind complex Terms of Service.
Ready to ditch the lag? Deploy a CentOS 5 or Debian Lenny KVM instance on our RAID-10 arrays today.