Latency is the Enemy: Architecting High-Performance Edge Nodes in Norway
Let’s be honest. If your users are in Oslo and your server is sitting in a datacenter in Frankfurt or Amsterdam, you are failing them. You might think 30ms is acceptable. It isn't. In the world of high-frequency trading or high-conversion ecommerce, 30ms is an eternity. It’s the difference between a conversion and a bounce.
I recently audited a media streaming platform targeting the Nordic market. Their infrastructure was solid—if you lived in London. But for a user in Tromsø on a 4G connection? Time To First Byte (TTFB) was averaging 200ms, and that was after the connection handshake had already settled. We moved their caching layer to Oslo, and the difference wasn't just noticeable; it was dramatic.
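You don't have to take an auditor's word for it; measure TTFB yourself from a client in the target region. A minimal check using curl's standard write-out timers (swap in your own URL):

# Measure DNS lookup, TCP connect, and time-to-first-byte
curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n" http://example.com/

Run it once from Oslo and once from your current datacenter's region; the gap between the two ttfb numbers is what your Norwegian users pay on every single request.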
This is what we call "Edge Computing" today—moving the logic and the cache closer to the eyeball. Here is how you build a ruthless, low-latency edge node using the tools we have right now in 2014.
The Hardware Reality: Spindles vs. Silicon
You cannot build a high-performance edge node on rotational disks. Period. When you are serving thousands of small static files or churning through a massive Varnish cache, IOPS (Input/Output Operations Per Second) is your bottleneck, not CPU.
Most VPS providers oversell their storage. They put you on a shared SAN with 50 other neighbors, and when one of them runs a backup script, your database locks up. This is where CoolVDS differs. We utilize direct-attached PCIe SSD storage. The latency difference between standard SATA SSDs and PCIe interfaces is significant when you are hammering the disk with random reads.
Here is a quick way to test if your current provider is lying to you about "dedicated" performance. Run this on your instance:
# Check your disk I/O latency
ioping -c 10 .
4 KiB from . (ext4 /dev/vda1): request=1 time=0.21 ms
4 KiB from . (ext4 /dev/vda1): request=2 time=0.23 ms
4 KiB from . (ext4 /dev/vda1): request=3 time=0.19 ms
...
If you see anything over 1.0ms here, move your data. You are waiting on physical heads to move or a choked controller.
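ioping tells you about latency on a single request; for sustained random-read performance, fio paints the fuller picture. A sketch of a 4K random-read job (the size, runtime, and queue depth here are illustrative; adjust for your disk):

# 4K random reads with direct I/O, bypassing the page cache
fio --name=randread --ioengine=libaio --rw=randread --bs=4k \
    --size=1G --runtime=30 --time_based --direct=1 --iodepth=32

Direct-attached PCIe flash should report IOPS in the tens of thousands on this job. An oversold shared SAN will not, especially once the neighbors wake up.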
The Software Stack: Varnish 4.0 & Nginx 1.6
For a Norwegian edge node, we don't want the application server (Apache/PHP) doing the heavy lifting. We want Varnish 4.0 sitting right at the front door. Varnish 4.0 (released earlier this year) brought massive improvements in thread handling over 3.0.
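Those thread pools are tunable at startup via runtime parameters. A sketch, with the caveat that the defaults are sane and you should measure before touching them (the pool sizes and cache size below are illustrative, not a recommendation):

# Start Varnish with explicit thread pool sizing and a 512 MB malloc cache
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,512m \
    -p thread_pools=2 -p thread_pool_min=100 -p thread_pool_max=2000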
1. Configuring Varnish for Edge Logic
The goal is to serve cached content from RAM instantly. If it misses, we fetch from the backend (which could be internal or remote), but we stream it to the client immediately.
Here is a battle-tested default.vcl snippet for request normalization and graceful background refreshes (Varnish's grace mode, effectively stale-while-revalidate), which is essential for keeping the site fast even when the backend is generating new content:
vcl 4.0;
backend default {
.host = "127.0.0.1";
.port = "8080";
.first_byte_timeout = 60s;
}
sub vcl_recv {
# Only cache GET and HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Normalize headers to improve cache hit rate
if (req.http.Accept-Encoding) {
if (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} else {
unset req.http.Accept-Encoding;
}
}
return (hash);
}
sub vcl_backend_response {
# Set grace period to 2 minutes
# This allows serving stale content while fetching fresh data
set beresp.grace = 2m;
}
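Before loading that into production, compile-check it, and once live, verify you are actually hitting the cache. Both commands are stock Varnish tooling (the VCL path assumes the conventional location):

# Compile the VCL without starting the daemon; errors print to stderr
varnishd -C -f /etc/varnish/default.vcl > /dev/null

# One-shot counter dump; watch the hit/miss ratio
varnishstat -1 | grep -E "MAIN\.cache_(hit|miss)"

On a mostly-static media site, a hit rate well below 90% usually means your header normalization or hashing needs another look.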
2. The Nginx Terminator
Behind Varnish, we use Nginx 1.6.2. Why not Apache? Because Nginx handles the C10k problem (10,000 concurrent connections) with an event-driven architecture that barely touches RAM. Apache's default prefork MPM spawns a process per connection; Nginx multiplexes thousands of connections across a handful of worker processes. On a 1GB VPS, that is the difference between staying online during a DDoS and crashing.
Here is the critical nginx.conf tuning for high-concurrency edge nodes. Pay attention to the worker_rlimit_nofile and keepalive_timeout:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
# Essential for high load
worker_rlimit_nofile 65535;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Buffer tuning for TCP optimization
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
# Raise to something like 4 8k if your app sets large cookies or long URLs
large_client_header_buffers 2 1k;
# Upstream to PHP-FPM
upstream backend {
server unix:/var/run/php5-fpm.sock;
}
}
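After editing, validate the syntax and reload the workers gracefully, then throw concurrency at the box to confirm the tuning holds. The localhost target in the ab run is a placeholder:

# Validate config, then reload without dropping connections
nginx -t && nginx -s reload

# 10,000 requests at 500 concurrent connections (ApacheBench)
ab -n 10000 -c 500 http://127.0.0.1/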
Kernel Tuning: The Sysctl Secret Sauce
Linux defaults are often conservative. Ubuntu 14.04 LTS is stable, but out of the box, it isn't tuned for an edge node handling thousands of TCP connections per second. You need to modify the network stack.
Pro Tip: Never apply sysctl settings blindly. Test them. But for a dedicated edge node, we need to reuse TIME_WAIT sockets faster to avoid port exhaustion.
Edit your /etc/sysctl.conf:
# Increase system-wide file descriptors
fs.file-max = 2097152
# Optimize TCP stack for low latency
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 300
net.ipv4.ip_local_port_range = 1024 65535
# Congestion control (CUBIC is standard in 3.x kernels, but ensure it's on)
net.ipv4.tcp_congestion_control = cubic
Load these with sysctl -p. If you are on a CoolVDS KVM instance, you have full kernel control to do this. If you are on an old OpenVZ container, you are likely stuck with whatever the host node gives you. Don't accept that limitation.
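Two quick checks to confirm the settings took effect and to watch for port exhaustion under real load:

# Verify the active congestion control algorithm
sysctl net.ipv4.tcp_congestion_control

# Socket summary; a ballooning timewait count means trouble
ss -s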
The Norwegian Context: NIX and Sovereignty
Why host in Norway specifically? It is not just about physics; it is about peering. The Norwegian Internet Exchange (NIX) in Oslo allows direct peering with major ISPs like Telenor and Altibox. If your server is in Germany, your traffic hops through multiple carriers. If it is in Oslo, it is often one hop away from the user.
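You can verify the hop count yourself. Run mtr from your node toward any Norwegian destination (vg.no below is just an example endpoint; substitute whatever your users actually hit):

# Per-hop latency report over 10 cycles
mtr --report --report-cycles 10 vg.no

From an Oslo node peered at NIX, Norwegian destinations should show up within a handful of hops at single-digit milliseconds; from Frankfurt, expect one or two transit carriers in the middle first.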
Furthermore, we have the Personopplysningsloven (Personal Data Act). While we wait to see how EU regulations evolve, keeping data within Norwegian borders simplifies compliance significantly for local businesses. You avoid the headache of the Safe Harbor agreement validity debates currently happening in Brussels.
Conclusion
Building an edge node isn't magic. It is a combination of fast I/O, an event-driven web server, and geographical proximity. You can try to optimize a server in Virginia to serve customers in Bergen, or you can respect the laws of physics.
If you are ready to stop fighting latency and start dominating benchmarks, you need a platform that gives you raw root access, true KVM isolation, and local peering.
Don't let slow I/O kill your SEO. Deploy a test instance on CoolVDS in Oslo today and see the ping drop below 5ms.