The Myth of "Guaranteed" Resources in 2012
If you have spent any time managing servers for high-traffic eCommerce sites or latency-sensitive applications, you know the feeling. It's 2:00 AM on a Tuesday. Your monitoring system—maybe Nagios or Zabbix—starts screaming. Load average spikes to 20.0, yet your traffic is normal. You check top and see nothing consuming resources. What is happening?
You have likely fallen victim to the "noisy neighbor" effect, a classic symptom of container-based virtualization like OpenVZ when managed by budget hosting providers. In 2012, too many providers in Europe are still overselling RAM and CPU cycles, banking on the statistical probability that not all clients will peak simultaneously. When they do, your application hangs, and your database locks up.
At CoolVDS, we refuse to play that game. We architect our infrastructure on KVM (Kernel-based Virtual Machine) because, for a production environment, complete hardware isolation isn't a luxury—it's a requirement. Let's dissect why KVM beats containerization for mission-critical workloads and how to tune it for maximum throughput.
KVM vs. OpenVZ: The Kernel Difference
The fundamental difference lies in the kernel. In an OpenVZ environment, you are sharing the host's kernel. If a neighbor triggers a kernel panic or an exploit hits a vulnerability in that shared kernel, your instance goes down with the ship. Furthermore, resources like "Burst RAM" are often just marketing fluff for memory you can borrow but can't keep.
KVM, on the other hand, turns the Linux kernel into a hypervisor. Each guest has its own kernel, its own memory space, and, crucially, acts as an independent process on the host. This means you can run custom kernels, load specific modules, and use swap space exactly how you see fit.
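A concrete illustration of what that independence buys you: loading a kernel module that container platforms typically block unless the host operator enables it for you (TUN/TAP for VPN setups is the classic example). Inside a KVM guest, this just works:
uname -r            # the guest's own kernel, not the host's
modprobe tun        # load TUN/TAP for OpenVPN; no support ticket to the host admin required
lsmod | grep tun    # confirm the module is loaded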
Pro Tip: If you are moving from a shared environment to KVM, ensure you are using VirtIO drivers. Emulating an IDE controller or an Intel E1000 network card adds unnecessary overhead. VirtIO is paravirtualization: the guest knows it is virtualized and talks to the hypervisor directly instead of pretending to drive real hardware.
Verifying VirtIO on CentOS 6
To ensure you are getting maximum I/O performance on your KVM slice, check that your disk and network drivers are virtualized correctly. Run this inside your VPS:
lsmod | grep virtio
You should see modules like virtio_net and virtio_blk loaded. If not, you are running in legacy emulation mode and leaving a significant slice of your disk and network throughput on the table.
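Two more quick checks worth running (assuming a single virtio disk and eth0 as your primary interface):
# virtio block devices appear as /dev/vdX rather than /dev/sdX or /dev/hdX
ls -l /dev/vda
# the NIC driver in use should report "driver: virtio_net"
ethtool -i eth0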
Optimizing I/O for SSDs: The Scheduler
One of the biggest shifts we are seeing in 2012 is the move from spinning 15k RPM SAS drives to Solid State Drives (SSDs). While SSD hosting is still a premium tier at many hosts, CoolVDS is pushing to make this the standard for high-performance tiers. However, Linux defaults are often tuned for spinning rust, not flash memory.
The default I/O scheduler in RHEL/CentOS 6 is usually cfq (Completely Fair Queuing), which optimizes for seek time, a concept that is irrelevant for SSDs. Worse, inside a guest the scheduler is second-guessing a host that already does its own I/O ordering. For a virtualized guest on SSD storage, you want the guest to get out of the way: switch your scheduler to deadline or noop.
Here is how to change it on the fly without a reboot:
# Check current scheduler
cat /sys/block/vda/queue/scheduler
# Output: [cfq] deadline noop
# Switch to noop (best for KVM guests on SSD)
echo noop > /sys/block/vda/queue/scheduler
To make this permanent, edit /boot/grub/grub.conf (menu.lst on Debian-style installs) and append elevator=noop to your kernel line.
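As an illustration, a typical CentOS 6 entry ends up looking like this (kernel version and UUID are placeholders; append the parameter to your own kernel line):
# /boot/grub/grub.conf
title CentOS (2.6.32-279.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=UUID=... quiet elevator=noop
        initrd /initramfs-2.6.32-279.el6.x86_64.img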
Benchmarking: Seeing is Believing
Don't take a provider's word for it. When we provision a new node, we run standard tests to ensure the host node isn't oversubscribed. A simple dd test can give you a rough idea of write speeds, but for a real-world simulation, we prefer using iozone or simply compiling a large software stack.
However, for a quick sanity check on write speed that the page cache can't flatter (conv=fdatasync forces the data onto the disk before dd reports its numbers), try this:
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync
On a standard HDD VPS, you might see 40-60 MB/s. On our CoolVDS SSD instances, we are consistently seeing speeds exceeding 300 MB/s. That difference is the line between a Magento checkout loading instantly or timing out.
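Throughput is only half the story. Databases live and die by the latency of small synchronous writes (InnoDB flushing its log, for example). A rough way to probe that with nothing but dd:
# 1000 tiny writes, each flushed to disk before the next one starts
dd if=/dev/zero of=latencytest bs=512 count=1000 oflag=dsync
# divide the elapsed time dd reports by 1000 for an average per-write latency;
# anything beyond a few milliseconds will hurt a busy MySQL instance
rm -f latencytest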
The Norwegian Context: Latency and Law
Performance isn't just about disk speed; it's about network topology. If your primary customer base is in Norway, hosting in Germany or the UK adds unnecessary milliseconds. Packets obey the laws of physics.
| Route | Approximate Latency (Ping) |
|---|---|
| Oslo to London | ~25ms |
| Oslo to Amsterdam | ~18ms |
| Oslo to Oslo (NIX) | < 2ms |
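Don't take our table at face value either; measure from where your users actually sit. Swap in your own hostname below:
# round-trip time from your location to the VPS
ping -c 10 your-vps.example.com
# mtr combines ping and traceroute and shows where the milliseconds are added, hop by hop
mtr --report --report-cycles 10 your-vps.example.com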
Furthermore, we must consider the legal landscape. With the Norwegian Personal Data Act (Personopplysningsloven) and the European Data Protection Directive (95/46/EC), keeping data within national borders is becoming a significant compliance advantage for businesses handling sensitive customer records. Datatilsynet, the Norwegian Data Protection Authority, is taking an increasingly strict view of how personal data flows across borders. Hosting on a VPS physically located in Oslo simplifies this compliance headache immediately.
Configuring Nginx for High Concurrency
Finally, having a fast KVM VPS is useless if your web server is the bottleneck. Apache 2.2 with the prefork MPM is stable, but for raw concurrency, Nginx is the superior choice in 2012. It uses an event-driven architecture instead of dedicating a whole process (or thread) to every connection.
Here is a snippet of a production nginx.conf tuned for a multi-core KVM instance:
user www-data;
worker_processes 4; # Match this to your KVM vCPU count
pid /var/run/nginx.pid;
events {
    worker_connections 4096;
    use epoll;       # Critical for Linux 2.6+ kernels
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 15;
    types_hash_max_size 2048;

    # Gzip settings for bandwidth saving
    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
}
By explicitly setting use epoll;, you allow Nginx to handle thousands of connections efficiently, leveraging the Linux kernel's capabilities directly—something that works best when you have the dedicated kernel resources that KVM provides.
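To tie the tuning together, size worker_processes to your actual vCPU count, validate the configuration, and run a quick smoke test with ApacheBench from a separate machine (the URL and concurrency below are placeholders to adjust for your own site):
# how many vCPUs does this KVM instance have?
grep -c ^processor /proc/cpuinfo
# validate the config and reload without dropping connections
nginx -t && service nginx reload
# ab ships with the httpd-tools package on CentOS
ab -n 10000 -c 500 http://your-vps.example.com/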
The Verdict
In the world of hosting, you generally get what you pay for. OpenVZ has its place for development boxes or low-priority personal sites. But if you are deploying a business application, a database server, or anything that requires consistent I/O performance, KVM is the industry standard for a reason.
At CoolVDS, we combine KVM virtualization with high-performance SSD storage and premium connectivity to the Norwegian Internet Exchange (NIX). We don't oversell, and we don't hide behind "burst" metrics.
Ready to see the difference dedicated resources make? Deploy a KVM instance in our Oslo datacenter today and stop fighting your neighbors for CPU.