
Stop Using OpenVZ for Production: Why KVM and SSDs Are the Only Real Choice in 2011


The "Burstable RAM" Lie: Why Serious Admins Are Migrating to KVM

Let’s be honest with ourselves. If you are running a high-traffic Magento store or a critical MySQL database on a budget VPS, you have probably lost sleep over "noisy neighbors." You know the feeling: your monitoring fires a load alert, but your access logs are quiet. Why? Because the guy on the same physical node is compiling a custom kernel or getting DDoS'd, and the virtualization layer (likely OpenVZ or Virtuozzo) is failing to isolate the I/O properly.

It is November 2011. The days of accepting oversold "burst RAM" as a feature are over. For production environments where latency translates directly to revenue, full virtualization via KVM (Kernel-based Virtual Machine) is no longer just an option; it is the baseline requirement.

The Architecture Gap: Shared Kernel vs. True Isolation

The problem with container-based legacy solutions like OpenVZ is the shared kernel. If the host node runs CentOS 5, you are effectively stuck with that kernel's limitations. If a vulnerability hits the host kernel, every container is potentially exposed. Furthermore, resource accounting is often fuzzy. "Guaranteed" CPU units can vanish when the host is under heavy load.

KVM changes the game by turning the Linux kernel itself into a hypervisor. Each guest has its own private memory, its own CPU registers, and crucially, its own kernel. You can run Debian 6 Squeeze, CentOS 6, or even FreeBSD, completely independent of the host node.
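Not sure what your current provider actually sold you? You can usually tell from inside the guest. This is a rough sketch using well-known heuristics, not a definitive fingerprint: /proc/user_beancounters is the classic OpenVZ giveaway, and the "hypervisor" CPU flag typically shows up in fully virtualized guests on 2.6.32-era kernels.

```shell
#!/bin/sh
# Heuristic check: container, full virtualization, or bare metal?
if [ -f /proc/user_beancounters ]; then
    # Only OpenVZ/Virtuozzo containers expose this file
    VIRT="openvz-container"
elif grep -qi 'hypervisor' /proc/cpuinfo 2>/dev/null; then
    # Flag set by the kernel when running under a hypervisor (KVM, Xen HVM, ...)
    VIRT="full-virtualization"
else
    VIRT="bare-metal-or-unknown"
fi
echo "Detected: $VIRT"
```

If the first branch fires, you are in a container sharing the host kernel, no matter what the order page said.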

Sysadmin Pro-Tip: When provisioning a KVM instance, always verify that your provider is using virtio drivers. These paravirtualized drivers let the guest OS talk directly to the hypervisor for network and disk I/O, bypassing the overhead of full hardware emulation. Without virtio, you can easily leave 30% of your I/O performance on the table.
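A quick way to verify this from inside the guest: with virtio-blk, disks appear as /dev/vda, /dev/vdb and so on, while emulated IDE or SCSI shows up as hda/sda. A minimal sketch:

```shell
#!/bin/sh
# Count block devices using the virtio naming scheme (vda, vdb, ...)
VIRTIO_DISKS=$(ls /sys/block 2>/dev/null | grep -c '^vd')
if [ "$VIRTIO_DISKS" -gt 0 ]; then
    echo "virtio-blk in use ($VIRTIO_DISKS device(s))"
else
    echo "no virtio block devices found; ask your provider about virtio"
fi
```

For the network side, check whether your NIC driver is virtio_net (e.g. via lsmod or dmesg).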

Optimizing for the SSD Revolution

This year has seen a massive shift in storage economics. While spinning SAS disks are still standard in many data centers, the performance boost from Solid State Drives (SSD) is undeniable. However, putting an SSD behind a legacy hypervisor is like putting a Ferrari engine in a tractor. You need an I/O scheduler that understands flash memory.

On a KVM instance running Linux 2.6.32+, you should switch your I/O scheduler from the default cfq to noop or deadline. The standard scheduler assumes a spinning disk head, which leads to unnecessary latency on flash storage.

# Check the current scheduler (vda assumes virtio; use sda for emulated SCSI)
cat /sys/block/vda/queue/scheduler

# Switch to noop for SSDs -- requires root; add to /etc/rc.local to persist
echo noop > /sys/block/vda/queue/scheduler
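Alternatively, you can set the scheduler on the kernel command line with the elevator= parameter, so it applies from the moment the disk is initialized. A sketch for GRUB legacy on CentOS 6 (the kernel version and root= device below are illustrative; keep your own values and just append the parameter):

```
# /boot/grub/grub.conf -- GRUB legacy entry (menu.lst on some Debian systems)
title CentOS (2.6.32-71.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/vda1 elevator=noop
        initrd /initramfs-2.6.32-71.el6.x86_64.img
```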

At CoolVDS, we don't just throw hardware at the problem; we tune the host nodes specifically for this high-throughput architecture.

Latency: The Norwegian Advantage

We often talk about hardware, but we ignore the speed of light. If your target audience is in Oslo, Bergen, or Trondheim, hosting your servers in Texas or even Frankfurt introduces unavoidable network latency. For a dynamic application, that 40ms round-trip time adds up with every database query and image request.

Hosting locally in Norway, peering at NIX (the Norwegian Internet Exchange), ensures your packets take the shortest possible path. Furthermore, compliance with the Personal Data Act (Personopplysningsloven) is far simpler when your data never leaves Norwegian soil. Safe Harbor covers US transfers, but local residency provides a certainty that legal departments appreciate.
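To see how that 40 ms compounds, here is a back-of-envelope calculation. The request count is an illustrative assumption for a dynamic page whose requests are not pipelined, so each one pays the full round trip:

```shell
#!/bin/sh
# Worst case: sequential requests each pay the full round-trip time
RTT_MS=40
SEQUENTIAL_REQUESTS=25   # DB round trips + assets; illustrative number
EXTRA_MS=$((RTT_MS * SEQUENTIAL_REQUESTS))
echo "Added latency per page load: ${EXTRA_MS} ms"
```

A full second of avoidable wait, per page load, before your application has done any actual work.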

Comparison: Legacy VPS vs. CoolVDS KVM

Feature      | Legacy OpenVZ/Virtuozzo       | CoolVDS KVM
Kernel       | Shared (host dependent)       | Dedicated (load your own modules)
Swap memory  | Often fake/burstable          | Real dedicated swap partition
Disk I/O     | Contended                     | Isolated with SSD RAID-10
Security     | Shared kernel vulnerabilities | Full hardware virtualization

The CoolVDS Implementation

We built our platform for the "Performance Obsessive." We realized that offering cheap, oversold containers was a race to the bottom. Instead, we deployed KVM on nodes backed by enterprise-grade SSD RAID-10 arrays. This setup offers the raw I/O throughput required for demanding applications like high-traffic forums (vBulletin, IP.Board) or heavy MySQL workloads.

We also include standard DDoS protection at the network edge, filtering out malicious traffic before it ever reaches your eth0 interface, which is critical in an era where botnets grow larger by the month.

If you are tired of wondering why your server slows down every day at 8:00 PM, it’s time to leave the noisy neighbors behind. Get the isolation of a dedicated server with the flexibility of a VPS.

Ready to compile your own kernel? Deploy a high-performance KVM instance in Oslo today.
