The Geography of Speed: Why Latency to Oslo Matters More Than Raw GHz
I am tired of seeing benchmarks that focus solely on synthetic CPU scores. Sysbench primes are great for heating up your office, but they don't tell you why your application feels sluggish to a user in Trondheim when your server is sitting in a massive datacenter in Amsterdam. In 2016, raw compute is a commodity. The real battleground is latency, and the laws of physics are the only constraint we can't patch.
If your target market is Norway, or the broader Nordics, a localized "Edge" strategy isn't just a buzzword; it's an architectural necessity. Let's talk about why the distance to the Norwegian Internet Exchange (NIX) dictates your application's responsiveness, and how to tune a Linux stack to actually utilize that proximity.
The 30ms Tax: Frankfurt vs. Oslo
Many developers default to the big cloud regions in Germany or Ireland. It's safe. It's standard. But it introduces a physical latency floor. A round trip from Oslo to Frankfurt is roughly 25-35ms. To Northern Norway, add another 15ms. That doesn't sound like much until you realize modern web applications require dozens of round trips to render a single state change.
I recently audited a Magento stack for a client complaining about checkout drop-offs. Their servers were powerful, hosted in AWS eu-central-1. The issue wasn't PHP execution time; it was the TCP handshake overhead and TLS negotiation across 1,500 kilometers of fiber. By moving the stack to a CoolVDS instance physically located in Oslo, we dropped the Round Trip Time (RTT) to under 2ms for local users. The conversion rate improved by 14% overnight. No code changes. Just geography.
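Don't take those numbers on faith; measure them yourself. The hostnames below are placeholders for your own endpoints, but the pattern is the same: compare RTT and handshake cost from the same client to both locations.
# Raw RTT from a client in Norway (hostnames are placeholders)
ping -c 20 frankfurt-box.example.com    # expect roughly 25-35ms from Oslo
ping -c 20 oslo-box.example.no          # expect low single digits
# Handshake cost is where distance bites: time_connect is the TCP handshake,
# time_appconnect includes the TLS negotiation on top of it
curl -o /dev/null -s -w 'TCP: %{time_connect}s  TLS: %{time_appconnect}s\n' https://oslo-box.example.no/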
The Hardware Reality: KVM vs. The Noisy Neighbors
Before we touch the config files, we must address the virtualization layer. In the budget hosting world, OpenVZ (containers) is still popular because it allows providers to oversell RAM. This is disastrous for consistent latency. If a neighbor spikes their I/O, your database transaction waits.
This is why we strictly enforce KVM (Kernel-based Virtual Machine) architecture at CoolVDS. You need a dedicated kernel. You need isolation. When you write to disk, you need to know if you are hitting a spinning rust platter or an NVMe interface.
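Not sure what your current provider actually gave you? Two quick checks reveal both the hypervisor and the disk type (assuming a systemd-based distro; swap sda for your actual block device):
# Identify the virtualization layer: kvm, openvz, lxc, vmware, ...
systemd-detect-virt
# 0 = SSD/NVMe, 1 = spinning rust
cat /sys/block/sda/queue/rotational
# Devices, rotational flag and model in one shot
lsblk -d -o NAME,ROTA,MODEL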
Pro Tip: Always check your disk scheduler in Linux. If you are on a virtualized NVMe drive (like our High-Performance tier), the old `cfq` scheduler is a bottleneck. Switch to `noop` or `deadline` to let the hypervisor and SSD controller handle the optimization.
# Check current scheduler
cat /sys/block/sda/queue/scheduler
# Switch to noop (add this to your rc.local or grub config for persistence)
echo noop > /sys/block/sda/queue/scheduler
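For persistence across reboots, a kernel parameter is the cleanest route on a GRUB2 system. This is a sketch assuming Ubuntu/Debian paths; on CentOS 7 regenerate with grub2-mkconfig -o /boot/grub2/grub.cfg instead of update-grub.
# /etc/default/grub - append the scheduler to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
# Rebuild the bootloader config, then reboot
update-grub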
Tuning the Stack for Low Latency
Having a server in Norway is step one. Configuring it to stop acting like it's on a 56k modem is step two. Most default Linux distributions (CentOS 7, Ubuntu 16.04) ship with conservative TCP settings designed for 100 Mbps LANs, not gigabit WANs.
1. TCP Fast Open
Google has been pushing this, and with Linux kernel 3.7+ (Ubuntu 16.04 ships 4.4, so you are covered) it's available on both ends. After a first visit, the client caches a TFO cookie and can send data inside the SYN packet itself, cutting a full round trip off every repeat connection.
Add the following to your /etc/sysctl.conf:
# Enable TCP Fast Open
net.ipv4.tcp_fastopen = 3
# Increase TCP window size for high-bandwidth paths
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Protect against SYN floods while we are at it
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
Apply it with sysctl -p.
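Two caveats. First, confirm the kernel actually took the values. Second, server-side TFO needs application support as well; nginx, for example, only uses it if the listener opts in.
# Verify the new values are live
sysctl net.ipv4.tcp_fastopen net.core.rmem_max net.core.wmem_max
On the nginx side that means something like listen 443 ssl http2 fastopen=256; (supported since 1.5.8), where the number is the backlog of not-yet-completed TFO handshakes.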
2. Nginx and HTTP/2
If you are still serving assets over HTTP/1.1 in late 2016, you are doing it wrong. HTTP/2 (published last year as RFC 7540) multiplexes requests over a single connection, which eliminates HTTP/1.1's head-of-line blocking. This is critical for mobile networks in Norway, where bandwidth is high (4G coverage is excellent here) but latency can be jittery.
Here is a snippet for a robust Nginx configuration enabling HTTP/2 and hardened TLS settings (essential post-Privacy Shield adoption):
server {
    listen 443 ssl http2;
    server_name example.no;

    ssl_certificate /etc/letsencrypt/live/example.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.no/privkey.pem;

    # Optimize TLS handshake
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;

    # OCSP Stapling (Speeds up SSL verification)
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    root /var/www/html;
    index index.php index.html;
}
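Validate, reload, and then verify from the outside that HTTP/2 and stapling actually work. The curl flags assume a build with HTTP/2 support; note also that ssl_stapling_verify usually wants an ssl_trusted_certificate pointing at the CA chain (chain.pem from Let's Encrypt) to avoid warnings.
# Syntax check, then reload without dropping connections
nginx -t && systemctl reload nginx
# First response line should read "HTTP/2 200"
curl -sI --http2 https://example.no/ | head -n 1
# Confirm an OCSP response is actually being stapled
echo QUIT | openssl s_client -connect example.no:443 -servername example.no -status 2>/dev/null | grep 'OCSP Response Status'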
Data Sovereignty: The Elephant in the Room
We cannot ignore the legal landscape. With the EU's General Data Protection Regulation (GDPR) looming on the horizon for 2018, and the shaky ground of the EU-US Privacy Shield, data residency is becoming a liability issue. Hosting data physically in Norway (outside the EU, but inside the EEA) offers a specific layer of compliance stability, especially regarding Datatilsynet (the Norwegian Data Protection Authority).
When you host on AWS or Azure, you are often dealing with a legal entity that ultimately answers to US subpoenas. When you host on CoolVDS, your data sits on drives owned by a Norwegian entity, governed by Norwegian law. For pragmatic CTOs, that simplifies the compliance matrix significantly.
The Verdict: Speed is Local
You can optimize your code until it's perfect. You can strip out every unused CSS class. But if your packets have to travel 3,000 kilometers round-trip, your app will feel slow.
Building a low-latency infrastructure requires three things:
- Proximity: Servers physically close to users (NIX peering).
- Isolation: True KVM virtualization, not shared kernels.
- Throughput: NVMe storage to prevent I/O wait times.
Don't let physics become your bottleneck. Check your current latency to your target demographic. If it's over 30ms, you have a problem.
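A plain ping gives you the headline number; mtr shows where along the path the milliseconds pile up, which tells you whether better peering (NIX) would actually help. The target below is a placeholder; use a host inside your users' ISP or your own monitoring probe.
# Per-hop latency report over 100 cycles
mtr --report --report-cycles 100 target.example.no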
Ready to test the difference? Deploy a CoolVDS instance in our Oslo datacenter today. Spin up time is under 55 seconds.