The Speed of Light is Too Slow: Why You Need to Move to the Edge
Let’s be honest: if your server is sitting in a datacenter in Ashburn, Virginia, and your users are sitting in Oslo, you have already failed. Physics is a harsh mistress. The round-trip time (RTT) for a packet to cross the Atlantic is roughly 100-120 milliseconds. Add in SSL handshakes, TCP slow start, and server processing time, and your users are staring at a white screen for half a second before a single byte of content renders.
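Don't take my word for it; measure it yourself. A quick sanity check from any shell (the hostnames below are placeholders, so substitute your own origin and a candidate edge node):

# RTT to a hypothetical US-East origin (placeholder hostname)
ping -c 5 origin-us-east.example.com

# Compare against a node on the Norwegian side of the Atlantic
ping -c 5 edge-oslo.example.com

# See where the milliseconds accumulate along the path
traceroute origin-us-east.example.com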
In the world of high-performance systems, 500ms is an eternity. Amazon found that every 100ms of latency costs them 1% in sales. Google found that an extra 0.5 seconds in search page generation dropped traffic by 20%. The solution isn't just "more RAM"; it's geography. We need to push the logic and the cache to the edge of the network.
For those of us targeting the Nordic market, "The Edge" isn't some abstract cloud concept. It means having metal running on the Norwegian Internet Exchange (NIX). It means keeping data within the jurisdiction of the Datatilsynet (Data Inspectorate) to comply with the Personal Data Act. And it means using a stack that can handle thousands of requests per second without choking on I/O.
The Architecture: Nginx + Varnish + SSD
Traditional monolithic hosting is dead. The modern 2013 approach is distributed. You might have your heavy backend database (MySQL 5.5 or PostgreSQL 9.2) in a central secure location, but your front-end delivery nodes must be close to the user. This is where the combination of Nginx (as a terminator/load balancer) and Varnish Cache shines.
I recently worked on a project for a Norwegian media outlet preparing for high-traffic spikes. They were running Apache with `mod_php`. The server load skyrocketed every time a breaking news story hit. The disk I/O wait (iowait) on their spinning SAS drives was hitting 40%. The CPU was idle, waiting for the disk to spin. It was pathetic.
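If you suspect the same disease on your own boxes, diagnose before you rearchitect. A rough sketch of the check we ran (iostat ships in the sysstat package on CentOS):

# Install sysstat if it is missing
yum install -y sysstat

# Extended device stats every 2 seconds; watch %iowait and await
iostat -x 2

An await column sitting in the tens of milliseconds on a busy web node tells you the disk, not the CPU, is the bottleneck.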
Step 1: The Reverse Proxy Layer
We replaced the frontend with Nginx 1.4.1. Nginx handles the "C10k problem" (10,000 concurrent connections) far better than Apache because it uses an event-driven, asynchronous architecture rather than Apache's process- or thread-per-request model. This is crucial for keeping connections open on mobile networks where latency fluctuates.
Here is a snippet of the nginx.conf used to tune the worker processes for a 4-core VPS on CoolVDS:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8096;
    use epoll;
    multi_accept on;
}

http {
    # Optimize TCP stack
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Keepalive to reduce handshake overhead
    keepalive_timeout 30;
    keepalive_requests 100000;

    # Buffer tuning to handle heavy payloads without disk thrashing
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;
    output_buffers 1 32k;
    postpone_output 1460;
}
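One habit worth keeping: validate before you reload, because a syntax error during a reload can take the whole edge node offline. On CentOS 6 the cycle looks like this:

# Test the configuration for syntax errors
nginx -t

# Graceful reload: old workers finish their requests first
service nginx reload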
Step 2: Varnish Configuration (The Magic)
Behind Nginx, we place Varnish 3.0. Varnish keeps fully rendered HTTP responses in RAM. If you serve a request from RAM, it takes microseconds. If you serve it from disk (even SSD), it takes milliseconds. If you generate it via PHP, it takes tens of milliseconds.
The trick is to use the Varnish Configuration Language (VCL) to intelligently strip cookies that prevent caching. Most sites set a `PHPSESSID` or Google Analytics cookie on every request, which effectively kills your cache hit rate. You must strip these on static assets.
sub vcl_recv {
    # Grace mode: keep serving (stale) content for up to 15s
    # while the backend is busy or down - vital for high availability
    set req.grace = 15s;

    # Remove has_js and Google Analytics (__utma et al.) cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
    set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");

    # If nothing but whitespace is left, drop the header entirely
    # so Varnish can cache the page
    if (req.http.Cookie ~ "^\s*$") {
        remove req.http.Cookie;
    }
}
sub vcl_fetch {
    # Cache objects for 1 hour, but allow serving them stale
    # for 15s if the backend goes down or is slow
    set beresp.ttl = 1h;
    set beresp.grace = 15s;
}
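Once the VCL is live, verify that cookie stripping actually pays off: cache_hit should dwarf cache_miss. A quick check with the tools bundled with Varnish 3 (counter names as in the 3.0 series):

# One-shot dump of the hit/miss counters
varnishstat -1 -f cache_hit,cache_miss

# Watch live traffic to see which requests still carry cookies
varnishlog -m "RxHeader:Cookie"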
Step 3: The Linux Kernel Tuning
Software configuration means nothing if your kernel is fighting you. On CentOS 6, the default TCP settings are conservative: sized for 100Mbit LANs, not for high-latency gigabit WAN links where the bandwidth-delay product demands much larger windows. We need to open up the TCP window.
Add these to /etc/sysctl.conf and run sysctl -p:
# Increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Enable TCP Window Scaling
net.ipv4.tcp_window_scaling = 1
# Protect against SYN flood attacks
net.ipv4.tcp_syncookies = 1
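Apply and verify; a value that silently failed to stick is worse than one you never set:

# Load the new values into the running kernel
sysctl -p

# Spot-check that the buffers actually changed
sysctl net.core.rmem_max net.ipv4.tcp_rmem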
The Hardware Reality: Why Spinning Rust is Dead
In 2013, deploying this stack on a VPS with standard HDD storage is like putting a Ferrari engine in a tractor. Varnish relies heavily on virtual memory. When the OS needs to swap pages or flush logs, rotational latency on an HDD (often 5-10ms per seek) leaves the CPU stalled in iowait.
This is where CoolVDS differs from the budget providers. We don't oversell our storage. We use Enterprise SSDs in a RAID-10 configuration. The random I/O performance of an SSD is roughly 100x that of a 15k RPM SAS drive. When you are serving thousands of small files (images, CSS, JS) simultaneously, IOPS (Input/Output Operations Per Second) matters more than raw throughput.
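Don't take any provider's word on IOPS, ours included: benchmark it. A minimal random-read test with fio (available from EPEL; the parameters here are illustrative, not a rigorous methodology):

# 4k random reads with direct I/O, bypassing the page cache
fio --name=randread --rw=randread --bs=4k --size=512m \
    --direct=1 --ioengine=libaio --runtime=30 --time_based

# On SSD expect tens of thousands of IOPS;
# a 15k RPM SAS drive manages a few hundred at best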
Pro Tip: Always check the "steal" time in `top` when evaluating a VPS provider. If `%st` is consistently above 5%, your provider is overselling their CPU cores. On CoolVDS KVM instances, we guarantee dedicated resource allocation, so your steal time should stay at 0.0%.
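You don't have to stare at top interactively; the steal column shows up in batch tools too:

# Last column (st) is CPU time stolen by the hypervisor
vmstat 1 5

# Or grab the Cpu(s) summary from a single batch run of top
top -bn1 | grep "Cpu(s)"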
Data Sovereignty and The Norwegian Advantage
Latency isn't the only reason to host locally. The legal landscape is shifting. With the EU Data Protection Directive (95/46/EC) and Norway's strict Personopplysningsloven, storing user data outside the EEA is becoming a compliance headache. By placing your edge nodes in our Oslo datacenter, you ensure that Norwegian customer data never physically leaves the country, simplifying your compliance with Datatilsynet audits.
Conclusion
Building an edge presence in 2013 doesn't require building your own datacenter. It requires smart software choices—Nginx and Varnish—and the right hardware foundation. You need low latency to NIX, high IOPS from SSD storage, and a kernel tuned for traffic.
Don't let your code wait on a spinning disk. Deploy a high-performance SSD VPS on CoolVDS today and see what sub-millisecond local latency does for your conversion rates.