Physics is Not Just a Suggestion
Let’s talk about the speed of light. In a vacuum, it's fast. In fiber optic cables running under the North Sea, zig-zagging through repeaters and switches between Amsterdam and Oslo, it's disappointingly slow. If you are serving Norwegian users from a data center in Germany or the UK, you are starting the race with a lead weight tied to your ankles.
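Quick back-of-envelope math: light in fiber travels at roughly 200,000 km/s, about two-thirds of its vacuum speed. Oslo to Dublin is around 1,300 km as the crow flies, so the physical floor for a round trip is about 13ms, and that is before a single router, repeater, or TCP handshake takes its cut.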
I recently audited a high-traffic media portal targeting the Nordic demographic. They were hosted with a popular budget cloud provider in Ireland, and their Time to First Byte (TTFB) averaged 140ms. By moving the frontend logic to a CoolVDS instance in Oslo, peered directly at NIX (the Norwegian Internet Exchange), we dropped that to 18ms. That is not an optimization; that is a resurrection.
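You can reproduce that measurement yourself with curl's timing variables (swap in your own URL):
# Time to first byte, measured from your location
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" http://www.example.no/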
This is what we call the Edge. It's not about complex distributed computing yet; it's about common sense. Put the bits closer to the eyeballs.
The Stack: Varnish 4.0 & Nginx
To dominate latency, we don't just need proximity; we need a software stack that bypasses disk I/O whenever possible. In 2014, the undisputed king of caching is Varnish, specifically the recently released Varnish 4.0. We pair this with Nginx for SSL termination (since Varnish doesn't speak HTTPS) and static file serving.
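Before touching any configuration, confirm you are actually on the 4.0 branch; most distribution repositories still ship 3.x:
varnishd -V    # should report varnish-4.0.x
nginx -v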
1. The Varnish Configuration (VCL)
Varnish 4.0 introduced some syntax changes from 3.0 (vcl_fetch became vcl_backend_response, req.request became req.method). Here is a production-ready default.vcl for a high-traffic news site. It defines an active health probe for the backend and a grace period so stale content can be served if your application server chokes.
vcl 4.0;

# Hosts allowed to issue PURGE requests; without this ACL the VCL won't compile
acl purge {
    "localhost";
    "127.0.0.1";
}

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = {
        .url = "/health.php";
        .timeout = 1s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_recv {
    # Purge logic for clearing cache
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }

    # Normalize Accept-Encoding to reduce cache fragmentation
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else if (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            unset req.http.Accept-Encoding;
        }
    }
}

sub vcl_backend_response {
    # Cache for 1 hour by default
    set beresp.ttl = 1h;
    # Allow stale content for 2 minutes if the backend is sick
    set beresp.grace = 2m;
}
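Once the file is in place, compile-check it and hot-load it without restarting Varnish (the "edge01" label below is arbitrary), then verify that purging works from an allowed host:
# Compile-check the VCL, then load and activate it via the admin interface
varnishd -C -f /etc/varnish/default.vcl > /dev/null
varnishadm vcl.load edge01 /etc/varnish/default.vcl
varnishadm vcl.use edge01
# Purge a single URL (the request must come from an IP in the purge ACL)
curl -X PURGE http://127.0.0.1:6081/some/article.html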
2. Optimizing Nginx for SSD I/O
Since we are running on CoolVDS, we have access to high-speed SSD storage. Standard HDDs struggle with random read/write operations (IOPS), but SSDs eat them for breakfast. However, Nginx needs to be tuned to utilize this throughput without locking up worker processes.
In your nginx.conf, enable sendfile together with aio (asynchronous I/O). One Linux wrinkle to know: nginx only performs aio on files read with direct I/O, so aio must be paired with directio; files below the directio threshold are still served via sendfile.
http {
    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    # Files of 512 bytes or more are read with O_DIRECT + aio;
    # anything smaller falls back to sendfile
    aio      on;
    directio 512;

    # Buffer sizes for handling heavy POST requests
    client_body_buffer_size 128k;
    client_max_body_size    10m;

    # Keepalive to reduce TCP handshake overhead
    keepalive_timeout  65;
    keepalive_requests 100000;
}
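Always validate before you reload; a typo in a live config takes your edge down with it:
# Test the configuration, then reload workers gracefully (no dropped connections)
nginx -t && nginx -s reload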
The "Local" Factor: Why Norway?
It is not just about speed. It is about sovereignty. Under the Norwegian Personal Data Act (Personopplysningsloven), keeping sensitive user data within national borders is increasingly a compliance necessity, not just a preference. While Safe Harbor exists on paper, relying on US-based servers is a legal minefield many CTOs prefer to avoid.
Pro Tip: Use mtr (My Traceroute) to verify your network path. If you see your packets routing through Stockholm to get to Oslo, your provider has bad peering. A proper Norwegian VPS should hit NIX directly.
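For example, run it in report mode from your workstation (the target below is a placeholder):
# 20 probe cycles, printed as a summary instead of the live view
mtr --report --report-cycles 20 your-vps.example.no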
Testing the Metal
Don't take marketing fluff at face value. Benchmark it. When we provision a CoolVDS KVM instance, we verify disk speed immediately using dd with the oflag=direct parameter to bypass the OS cache. This gives us the real speed of the underlying storage array.
# Write speed test: 1GB sequential write, bypassing the page cache
# (bs=1G allocates a 1GB buffer, so make sure the RAM is free)
dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
# Read speed test, likewise bypassing the cache on the way back
dd if=testfile of=/dev/null bs=1G count=1 iflag=direct
If you aren't seeing write speeds north of 300 MB/s, you aren't on modern enterprise SSDs; you're likely on a noisy neighbor SATA array. For database-heavy applications, that I/O bottleneck is where your application dies.
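Sequential dd numbers only tell half the story; databases live and die by random 4K I/O. If fio is available, a quick random-read pass paints a truer picture. A minimal sketch, assuming roughly 1GB of free space in the working directory:
# Random 4K reads at queue depth 32, bypassing the page cache
fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --size=1g --iodepth=32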
Architecture Diagram: The "Norwegian Edge"
Here is the reference architecture we use for clients requiring < 20ms latency in Oslo:
- DNS: Geo-aware DNS (like Amazon Route 53 or Dyn) points users in Norway to the CoolVDS IP.
- Ingress: Nginx listens on ports 80/443. It terminates SSL and serves static assets (images, CSS) directly from SSD; see the sketch after this list.
- Cache: Dynamic requests are passed to Varnish on port 6081.
- Application: Cache misses hit PHP-FPM/Python/Ruby workers.
- Database: Percona Server (tuned for InnoDB) runs locally or on a private VLAN.
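Here is a minimal sketch of that ingress wiring, assuming Varnish listens on 6081; the domain and paths are hypothetical:
server {
    listen 80;
    server_name news.example.no;  # hypothetical domain

    # Static assets come straight off the SSD, bypassing Varnish entirely
    location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
        root /var/www/static;
        expires 7d;
    }

    # Everything dynamic goes to Varnish
    location / {
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}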
Conclusion
The internet is getting heavier. Web pages in 2014 utilize more JavaScript, larger images, and more third-party trackers than ever before. You cannot control the user's connection quality, but you can control where the handshake happens. By placing your infrastructure in Oslo, you eliminate the variable of international routing.
Stop accepting 40ms latency as "good enough." It isn't.
Ready to drop your ping times? Deploy a pure-SSD KVM instance in our Oslo datacenter today.