Stop Letting 'Shared' Resources Kill Your Uptime
If I see one more hosting provider selling "burstable RAM" as a feature, I'm going to pull the CAT5 cables out of their racks myself. In the last three months, I've migrated four different clients away from cheap OpenVZ containers. Why? Because when your neighbor decides to compile a kernel or run a poorly optimized PHP script, your database latency spikes through the roof.
In 2009, if you are serious about hosting—whether it's a high-traffic Magento store or a critical Subversion repository—you need deterministic performance. You need Xen.
This isn't just about switching providers; it's about understanding the architecture of isolation. Let's look at how to configure Xen for raw performance and why we at CoolVDS built our entire Norwegian infrastructure on it.
The Architecture: Dom0 vs. DomU
Unlike container-based solutions where everyone shares the same kernel (and the same kernel panics), Xen uses a hypervisor layer. The Dom0 (privileged domain) manages the hardware, while your VPS lives in a DomU (unprivileged domain).
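You can see this split directly from the privileged domain. Below is what `xm list` output typically looks like on a Dom0 (the guest name and the numbers here are hypothetical, stored in a variable purely so the sketch is self-contained):

```shell
# Sample 'xm list' output as seen from the Dom0 (guest name and numbers
# are hypothetical); on a real Dom0 you would simply run: xm list
XM_LIST='Name              ID  Mem(MiB) VCPUs State  Time(s)
Domain-0           0       512     2 r-----  9184.3
coolvds_node_01    3      1024     2 -b----   412.6'

echo "$XM_LIST"
```

Domain-0 always has ID 0 and holds the hardware drivers; every paying customer lives in a DomU with its own kernel and its own memory allocation.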
This matters because of the Data Inspectorate (Datatilsynet). When you are hosting data in Norway, you want to ensure strict logical separation. If a kernel exploit hits a neighbor on an OpenVZ node, you might be exposed. On Xen, that exploit is trapped inside their virtual machine.
Paravirtualization (PV) is the Speed King
We currently have two choices: Full Virtualization (HVM) or Paravirtualization (PV). Unless you are stuck running Windows Server 2008, you should be using PV.
In PV mode, the guest OS (your CentOS 5 or Debian Lenny system) "knows" it is virtualized. Instead of emulating a physical network card or disk controller, it makes efficient hypercalls directly to the hypervisor. The result is I/O performance that is nearly indistinguishable from bare metal.
Pro Tip: Always check your kernel. If you aren't running a Xen-aware kernel, you are stuck in HVM mode with the overhead of QEMU device emulation. Run `uname -r` and look for the 'xen' tag.
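A trivial way to automate that check in a provisioning script. The kernel string below is hard-coded as an example; on a real guest you would capture it with `KVER=$(uname -r)`:

```shell
# Hypothetical kernel version string; on a real guest, use: KVER=$(uname -r)
KVER="2.6.18-128.el5xen"

case "$KVER" in
  *xen*) echo "OK: Xen-aware kernel ($KVER) -- PV hypercalls available" ;;
  *)     echo "WARNING: no 'xen' tag -- likely HVM with QEMU device emulation" ;;
esac
```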
Configuration: The Anatomy of a Stable Node
Forget the GUI control panels for a second. If you want to understand your server, look at the config file. A standard CoolVDS Xen configuration block located in /etc/xen/ looks like this:
name = "coolvds_node_01"
memory = 1024
vcpus = 2
# The kernel matters. PV allows direct hypercalls.
kernel = "/boot/vmlinuz-2.6.18-128.el5xen"
ramdisk = "/boot/initrd-2.6.18-128.el5xen.img"
root = "/dev/xvda1 ro"
disk = [ "phy:/dev/VolGroup00/LogVol00,xvda,w" ]
vif = [ "mac=00:16:3e:4a:22:01, bridge=xenbr0" ]
# On Crash behavior is critical for HA setups
on_crash = "restart"
Notice the phy: directive in the disk line. At CoolVDS, we map LVM logical volumes directly to your instance. We don't use file-backed loop devices (like `disk.img`), which slow disk writes significantly. This direct mapping is crucial for database throughput.
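For contrast, here are the two disk-line variants side by side (the paths are illustrative, not from a real node):

```
# Slow: file-backed loop device -- every guest write funnels through
# the Dom0 filesystem and the loop driver before hitting the platter
disk = [ "file:/var/xen/images/disk.img,xvda,w" ]

# Fast: raw LVM logical volume mapped straight through to the guest
disk = [ "phy:/dev/VolGroup00/LogVol00,xvda,w" ]
```

LVM also gives you snapshots and painless volume resizing for free, which file-backed images can't match.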
The Storage Bottleneck: Why RAID-10 SAS Matters
CPU cycles are cheap. Disk I/O is expensive. This is the golden rule of 2009.
Most budget VPS providers stick you on 7,200 RPM SATA drives. Under high concurrent writes (like logging hits on a busy Apache server), the read/write heads on those disks simply cannot keep up. The result? iowait spikes, and every process on the box queues behind the disk.
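You can spot this contention with `vmstat`: the `wa` column is the percentage of CPU time stalled waiting on disk. A minimal sketch, parsing a captured sample line (the values are hypothetical) rather than live output:

```shell
# Sample 'vmstat 1' data line (hypothetical values); on a real box,
# pipe live output instead: vmstat 1 5 | tail -n +3 | awk '{ ... }'
SAMPLE=" 2  1  0  81234  12344 512344   0   0   840  2190 1402 2811 12  6 40 42  0"

# Field 16 is 'wa' (iowait %); flag anything over 20 as disk-bound
echo "$SAMPLE" | awk '{ if ($16 > 20) print "DISK-BOUND: iowait at " $16 "%" }'
```

If that number is regularly above 20 on your current VPS, no amount of extra RAM or CPU will save you; the disk subsystem is the bottleneck.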
We solve this with 15k RPM SAS drives in RAID-10. While Solid State Drives (SSDs) like the Intel X25-E are just starting to enter the enterprise market, they are still prohibitively expensive for mass storage. High-speed SAS in RAID-10 is currently the only reliable way to get the low latency required for professional applications without bankrupting your IT budget.
| Feature | OpenVZ / Virtuozzo | Xen (CoolVDS) |
|---|---|---|
| Kernel | Shared (One panic kills all) | Isolated (Your own kernel) |
| Memory | Burstable (Oversold) | Dedicated (Hard limit) |
| Swap | Fake / Unavailable | Real Dedicated Swap Partition |
| I/O Performance | Contended | Priority Scheduled |
Local Latency: The Oslo Advantage
If your target audience is in Norway, hosting in Germany or the US is a mistake. The speed of light is a hard limit.
By placing our racks with a direct connection to NIX (the Norwegian Internet Exchange) in Oslo, we drop latency from ~40ms (a typical European round trip) to under 5ms for local users. For TCP-heavy protocols or AJAX-rich web applications, that difference makes your site feel instant.
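The physics is easy to sanity-check. Light in fibre travels at roughly 200,000 km/s (about two-thirds of its speed in vacuum), so even a perfect, congestion-free path has a hard floor on round-trip time. A back-of-envelope sketch; both fibre distances below are rough assumptions:

```shell
# Best-case RTT: distance counted twice (round trip), fibre speed
# ~200,000 km/s. Both distances are rough assumptions for illustration.
awk 'BEGIN {
  km_frankfurt = 1400   # assumed Oslo <-> Frankfurt fibre path
  km_oslo      = 50     # assumed local loop within the Oslo area
  printf "Frankfurt floor: %.1f ms\n", 2 * km_frankfurt / 200000 * 1000
  printf "Oslo floor:      %.1f ms\n", 2 * km_oslo      / 200000 * 1000
}'
```

Real-world figures sit well above these theoretical floors once routing, serialization, and peering hops are added, which is exactly why a continental round trip lands around 40ms while local traffic stays in single digits.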
Furthermore, adhering to the Personopplysningsloven (Personal Data Act) is much simpler when your data physically resides within Norwegian borders, satisfying local compliance requirements for handling sensitive user data.
Stop Fighting the Hypervisor
You have enough to worry about with cross-browser compatibility in IE6 and keeping your MySQL 5.1 replication in sync. You shouldn't have to worry about whether your neighbor is stealing your CPU cycles.
Xen provides the strict isolation of a dedicated server with the flexibility of virtualization. It’s what powers the biggest clouds in the world right now (like Amazon EC2), and it’s what should power your infrastructure.
Don't let slow I/O kill your project. Deploy a Xen PV instance with dedicated RAM and RAID-10 storage on CoolVDS today. We don't oversell, and we don't play games with your uptime.