The "Guaranteed RAM" Lie
If I see one more hosting provider claim they offer "burst RAM" as a feature, I'm going to scream. It’s 2009, folks. We are building applications that demand consistent memory and predictable I/O, not a lottery ticket where you hope your neighbor isn't compiling a Gentoo kernel when you need to serve traffic.
Here is the reality of the Nordic hosting market right now: Most budget VPS providers are stuffing 50, maybe 60 containers (OpenVZ/Virtuozzo) onto a single chassis. They oversell RAM because they bank on the fact that you won't use it. But when your MySQL InnoDB buffer pool tries to claim that memory and the host node is swapping? Your site dies. Instantly.
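Don't take my word for it. If you are sitting in one of those containers right now, OpenVZ usually exposes its resource accounting inside the guest, and the failcnt column tells you how many times the host has flat-out refused you an allocation:

# Anything above zero in the last column (failcnt) is a denied allocation
cat /proc/user_beancounters

If that counter is climbing, your "guaranteed" RAM is anything but.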
This is why at CoolVDS, we exclusively deploy Xen Paravirtualization (PV). It’s not about marketing; it’s about physics and kernel isolation.
Xen PV vs. The Rest: A Technical Autopsy
Unlike container-based solutions, where you share a kernel with every other customer on the box, Xen is a Type-1 hypervisor: a thin layer runs directly on the hardware, and your VPS (a DomU) talks to it through paravirtualized drivers. You get your own kernel. You get your own swap partition. You get real isolation.
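It is easy to verify, too. On a CentOS 5 slice you can confirm you are really booting a Xen PV kernel rather than borrowing a shared container kernel; the exact version string will differ on your box, but the xen suffix is the giveaway:

# A PV guest boots a xen-flavoured kernel, e.g. 2.6.18-128.el5xen
uname -r
# The boot log mentions the hypervisor as well
dmesg | grep -i xen | head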
In a recent migration for a client running a high-traffic forum in Oslo, we saw their load average spike to 20.0 on a competitor's "Enterprise" container plan. The CPU wasn't the bottleneck—it was I/O wait. The host node's disk queue was saturated by another user.
We moved them to a Xen PV slice on CoolVDS running CentOS 5.3. The result? Load average dropped to 0.4. Same specs on paper, completely different reality.
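You can diagnose this yourself before you migrate anything. Watch the iowait numbers for a few minutes during peak traffic; the commands below assume the stock procps and sysstat packages, and the thresholds are rules of thumb rather than gospel:

# The "wa" column is the percentage of time the CPU sits idle waiting on disk
vmstat 2 10
# Per-device detail, including average wait times (requires the sysstat package)
iostat -x 2 5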
Optimizing Your Xen DomU for Performance
Just having Xen isn't enough; you need to tune it. If you are running a standard RHEL or CentOS 5 stack, you need to be aware of how the Linux 2.6.18 kernel handles virtual memory when it runs as a guest under a hypervisor.
1. Stop Swapping Unnecessarily
By default, the kernel may swap application memory out far too aggressively. Check the current swappiness, then lower it in /etc/sysctl.conf:
# Check the current value
cat /proc/sys/vm/swappiness
# For a dedicated database node, add this line to /etc/sysctl.conf
vm.swappiness = 10
# Reload the file so the change takes effect without a reboot
sysctl -p

2. Disk I/O Schedulers
Inside a Xen guest, the hypervisor already handles the real disk scheduling, so the default cfq (Completely Fair Queuing) scheduler just adds a second layer of queueing and latency. Switch the guest's disk scheduler to noop or deadline for better throughput on virtual block devices.
# Add this to your kernel line in /boot/grub/menu.lst
elevator=noop

Pro Tip: If you are running MySQL 5.0 or 5.1, ensure you are using the innodb_file_per_table directive in my.cnf. On a shared I/O subsystem, huge single table files can become fragmentation nightmares. Split them up.
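For reference, here is a minimal my.cnf fragment along those lines. The buffer pool number is purely illustrative for a dedicated database DomU with about 1 GB of RAM; size it to your own plan. And remember that innodb_file_per_table only applies to tables created (or rebuilt) after it is switched on.

[mysqld]
# One .ibd file per InnoDB table instead of a single ever-growing ibdata1
innodb_file_per_table
# Illustrative only: roughly half the guest's RAM on a 1 GB database slice
innodb_buffer_pool_size = 512M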
The Hardware Truth: SAS RAID-10 is King
We don't hide our infrastructure. While consumer SSDs are starting to make waves in the enthusiast market, they aren't reliable enough for enterprise write cycles yet. That is why we stick to the gold standard: 15,000 RPM SAS drives in Hardware RAID 10.
RAID 10 gives you the striping speed of RAID 0 with the mirroring redundancy of RAID 1. It is expensive to build, which is why budget hosts fall back on RAID 5 or plain SATA drives. But every small random write on RAID 5 costs four physical operations (read data, read parity, write data, write parity) versus two mirrored writes on RAID 10, and when you are pushing 500 queries per second that write penalty will kill your latency.
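If you want to sanity-check the disk subsystem you have actually been handed, a crude direct-I/O write test from inside the guest says more than any spec sheet. It is sequential rather than random I/O, so treat the number as a rough indicator, and the file path is just an example:

# Write 1 GB with the page cache bypassed, then clean up
dd if=/dev/zero of=/tmp/ddtest bs=64k count=16384 oflag=direct
rm -f /tmp/ddtest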
Data Sovereignty and The "Datatilsynet" Factor
Latency isn't just about milliseconds; it's about legality. If you are a Norwegian business, hosting your customer data in the US (even via Safe Harbor) introduces complexity. By keeping your data in Oslo, you aren't just getting 2ms pings to the NIX (Norwegian Internet Exchange); you are adhering strictly to the Personopplysningsloven (Personal Data Act).
Your data stays on Norwegian soil, protected by Norwegian power grid redundancy and local laws.
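And don't take latency figures on faith, mine included. Measure from your own office line; the hostname below is just a placeholder for whichever server you are testing:

# Round-trip time from your desk to the box
ping -c 20 vps1.example.com
# mtr shows where the milliseconds go, hop by hop
mtr --report --report-cycles 20 vps1.example.com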
Final Verdict
You can save 50 NOK a month by choosing a container on an overloaded server in Germany, or you can secure your uptime with dedicated Xen resources right here in Norway. In the world of systems administration, you get exactly what you pay for.
Don't let I/O wait kill your reputation. Deploy a Xen PV instance on CoolVDS today and experience the stability of true isolation.