The Lie of "Burst RAM" and The Truth About Isolation
If I see one more hosting provider marketing "Burst RAM" as a feature, I might just unplug my switch. It is 2012. We are building sophisticated applications with Magento, Drupal, and custom Python frameworks. We cannot afford to have our database latency spike to 500ms just because another user on the same physical node decided to compile a kernel or run a Minecraft server on a $5 OpenVZ slice.
I have spent the last decade in terminals, watching servers melt under load. The culprit is almost always soft isolation. That is why at CoolVDS, we don't play games with containers for our core infrastructure offerings. We use Xen.
Understanding the Hypervisor: Dom0 vs. DomU
To understand why your MySQL query is hanging, you have to understand the layer below your OS. In a Xen environment, there is a clear hierarchy that protects your resources.
- The Hypervisor: Runs directly on the metal. It’s tiny, secure, and its only job is to schedule CPU and memory.
- Dom0 (Domain 0): The privileged domain (usually Linux) that manages the others. It talks to the hardware drivers.
- DomU (User Domains): That’s you. Your VPS. Totally isolated.
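If you ever get shell access to a Dom0 (on CoolVDS nodes that is our staff, not customers), the hierarchy is plain to see with the xm toolstack that ships with Xen 4.x. A rough sketch, with made-up domain names:
xm list
# Name                         ID   Mem VCPUs      State   Time(s)
# Domain-0                      0  2048     4     r-----  11204.3
# web01                         3  4096     2     -b----    870.1
# db01                          5  8192     4     r-----   2310.6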
Unlike container-based virtualization, where a kernel panic on the host brings down every guest, and where a shared kernel limits which modules you can load (iptables/netfilter extensions, VPN tunnel drivers like tun or gre), Xen gives you a dedicated kernel.
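A concrete illustration of what a dedicated kernel buys you: on a Xen DomU, loading a driver is your call; inside a typical OpenVZ container the module has to be loaded on the host kernel, and your provider decides whether that happens.
# On a Xen guest: load the tun driver for an OpenVPN tunnel
modprobe tun
lsmod | grep tun   # listed: you just loaded a kernel module yourself
# On most OpenVZ slices the same modprobe fails unless the host admin
# has loaded tun and explicitly exposed it to your container.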
Paravirtualization (PV) vs. HVM: What You Need to Know
When you provision a server, you usually face this choice. Here is the reality of the situation as it stands today.
Paravirtualization (Xen-PV)
In PV mode, the guest OS "knows" it is virtualized. It makes efficient hypercalls directly to the hypervisor instead of going through emulated hardware, which has historically made it faster for Linux guests.
# Checking your Xen mode in CentOS 6
cat /sys/hypervisor/type
# Output: xen
Hardware Virtual Machine (Xen-HVM)
HVM uses the virtualization extensions in modern CPUs (Intel VT-x or AMD-V) to emulate a complete hardware environment. This allows you to run unmodified operating systems (like Windows or BSD).
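HVM is only an option if the host CPU actually exposes those extensions. On any Linux box (this is a host-level check, so mostly relevant if you run your own Xen lab) the flags are easy to inspect:
# vmx = Intel VT-x, svm = AMD-V. No output means no hardware virtualization.
egrep -o 'vmx|svm' /proc/cpuinfo | sort -u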
Pro Tip: For raw Linux web server performance in 2012, a well-tuned PV guest often edges out HVM on disk I/O, though the gap is closing rapidly with PV-on-HVM drivers. If you are running a standard LAMP stack, stick to PV for now.
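Not sure which mode your current VPS runs in? Two quick checks from inside the guest; exact output varies by distro and Xen version, so treat them as a guide rather than gospel:
# PV guests have no emulated BIOS, so dmidecode finds no SMBIOS table.
# On an HVM guest it typically reports something like "HVM domU".
dmidecode -s system-product-name
# Whichever mode you boot in, you want the paravirtual drivers handling
# disk and network. No output here means fully emulated devices, and you
# are paying for that in I/O latency.
lsmod | grep -E 'xen_blkfront|xen_netfront'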
Configuration: Defining a Robust Guest
Let's look at what a proper Xen configuration file looks like. We don't use GUIs for this; we use the config files in /etc/xen/. Here is a snippet from a production configuration designed for stability over "burst" capability.
# /etc/xen/web01.cfg
name = "web01"
# Dedicated memory. No ballooning tricks that steal RAM when you need it.
memory = 4096
maxmem = 4096
# CPU Pinning. Essential for high-load databases to prevent cache thrashing.
vcpus = 2
cpus = "2,3"
# Network bridge to the physical interface
vif = [ 'bridge=xenbr0, mac=00:16:3e:xx:xx:xx' ]
# Storage using LVM backing for snapshot performance
disk = [ 'phy:/dev/volgroup/web01-disk,xvda,w' ]
bootloader = "/usr/bin/pygrub"
Notice the cpus directive? That is CPU affinity: it restricts the guest's virtual CPUs to physical cores 2 and 3 instead of letting them float. Most budget providers won't do this for you. They let the scheduler bounce your VM across cores, destroying your L1/L2 cache locality. We pin high-priority instances at CoolVDS to ensure consistent latency.
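For completeness, this is roughly how the domain is started and the pinning verified from Dom0 (web01 being the hypothetical guest defined above; output trimmed and illustrative):
xm create /etc/xen/web01.cfg
xm vcpu-list web01
# Name     ID  VCPU   CPU State   Time(s) CPU Affinity
# web01     3     0     2   -b-      41.2 2-3
# web01     3     1     3   r--      39.8 2-3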
The Storage Bottleneck: Why SSDs Are Non-Negotiable
You can have 16 cores and 32GB of RAM, but if you are running on a shared 7,200 RPM SATA drive, your site will crawl. A mechanical drive tops out around 100-150 IOPS (Input/Output Operations Per Second) for random writes. A busy Magento store hits that ceiling in seconds.
This is why we have aggressively rolled out Enterprise SSD storage across our nodes. We are seeing IOPS figures in the thousands. Here is a simple dd test you can run to check your write speed (careful running this on a production DB server):
# Test write speed with a 1GB file
dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
# On a standard SATA VPS, you might see 40-60 MB/s.
# On CoolVDS SSD nodes, we consistently see 250+ MB/s.
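Keep in mind that dd measures sequential throughput, which flatters mechanical disks; random 4K writes are what actually kill a database. If you can pull in fio (packaged in EPEL for CentOS 6), something like the following gives a rough random-write IOPS figure. Treat it as a sketch, and never point it at a volume holding live data:
# Random 4K writes, direct I/O, 60 seconds. Read the "iops=" line in the summary.
fio --name=randwrite --rw=randwrite --bs=4k --size=512m \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
# Spinning SATA lands in the low hundreds at best; a healthy SSD volume
# reports thousands.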
Optimizing Linux for Xen
Just booting the server isn't enough. You need to tune the guest OS. In CentOS 6, the default I/O scheduler is CFQ (Completely Fair Queuing). CFQ is great for spinning rust, but it's terrible for virtualization and SSDs. It tries to anticipate head movement that doesn't exist on flash storage.
Switch to noop or deadline to cut latency immediately.
# Check current scheduler
cat /sys/block/xvda/queue/scheduler
# [cfq] deadline noop
# Switch to noop on the fly
echo noop > /sys/block/xvda/queue/scheduler
# Make it permanent: add elevator=noop to the kernel line in
# /boot/grub/grub.conf (menu.lst on some setups), e.g.
#   kernel /vmlinuz-... ro root=... elevator=noop
Data Sovereignty and The Norwegian Advantage
Beyond the technical specs, we have to talk about where your data lives. With the Data Protection Directive (95/46/EC) and the strict enforcement by Datatilsynet here in Norway, relying on US-based hosting carries real risk. Safe Harbor exists, but for how long? And do you really want your sensitive customer logs sitting on a server in Virginia?
Hosting in Norway isn't just about compliance; it's about physics. If your users are in Oslo, Bergen, or Trondheim, latency matters.
| Route | Average Latency (ms) |
|---|---|
| Oslo to Amsterdam | ~18-22 |
| Oslo to London | ~25-30 |
| Oslo to CoolVDS (NIX connected) | < 2 |
That 20ms difference is an eternity when you are dealing with hundreds of HTTP requests per page load.
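Don't take our table on faith; measure from where your users actually sit. mtr combines ping and traceroute into one report (swap in your own server's hostname or IP):
# 20 probe cycles, printed as a plain-text report with per-hop latency and loss
mtr --report --report-cycles 20 your-server.example.com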
Final Thoughts
Virtualization is not a commodity; it is an architectural decision. You can choose the "cheap" route with oversold containers, or you can choose the architecture that respects your need for dedicated resources.
If you are tired of debugging performance issues that turn out to be your host's fault, it is time to move. Deploy a Xen PV instance on our new SSD-backed infrastructure today. Your top command will thank you.