Latency is the Enemy: Why 'Edge' Means Hosting in Oslo, Not Amsterdam

The Speed of Light is Too Slow for Your Users

Let’s cut through the marketing fluff. You hear terms like "Cloud" and "Global Scalability" thrown around by sales teams who have never opened a terminal. They tell you it doesn't matter where your server lives as long as it has a CDN. They are lying.

If your primary customer base is in Norway, hosting your infrastructure in Frankfurt, Amsterdam, or (heaven forbid) Virginia is a technical failure. Why? Because the speed of light is finite, and TCP handshakes are expensive.

In the context of the Nordic market, Edge Computing isn't about some futuristic IoT mesh; it's about putting your compute resources physically closer to the Norwegian Internet Exchange (NIX) in Oslo. Today, on March 25, 2014, we are going to look at why milliseconds cost you money and how to architect a low-latency stack using Nginx and Varnish on a local VPS.

The Math of a Request: Oslo vs. The World

Let's look at the raw numbers. I ran a standard `mtr` (My Traceroute) from a fiber connection in Oslo to a "budget" VPS provider in Amsterdam, and then to a CoolVDS instance right here in Oslo.
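If you want to reproduce this, the invocation is simple. The targets below are the same hosts you see in the reports; substitute your own:

# Report mode, 10 probes per hop (matches the Snt column below)
mtr --report -c 10 target-vps-ams.net
mtr --report -c 10 target-vps-oslo.net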

Target: Amsterdam Data Center

HOST: local-workstation             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1                  0.0%    10    0.8   0.9   0.8   1.2   0.1
  ... (isp hops) ...
  8.|-- xe-4-2-0.amster-pe.net       0.0%    10   34.2  34.5  34.1  35.8   0.5
  9.|-- target-vps-ams.net           0.0%    10   35.1  35.4  34.9  36.2   0.4

Target: CoolVDS (Oslo)

HOST: local-workstation             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1                  0.0%    10    0.8   0.9   0.8   1.2   0.1
  ... (nix peering) ...
  5.|-- nix.coolvds.net              0.0%    10    1.8   1.9   1.7   2.1   0.1
  6.|-- target-vps-oslo.net          0.0%    10    2.1   2.2   2.0   2.4   0.1

35ms vs 2ms. That doesn't sound like much until you do the arithmetic. A fresh HTTPS connection needs roughly four round trips before the first byte arrives: one for the TCP handshake (SYN, SYN-ACK, ACK), two for the TLS negotiation, and one for the request itself. At 35ms per round trip that is ~140ms of pure waiting; at 2ms it is ~8ms. Now multiply by the 50-100 requests a modern page makes. Even with keep-alive, TCP Slow Start means every new connection begins with a small congestion window, so the 33ms delta compounds into a visible 1-2 second delay on page load. In eCommerce, that is an abandoned cart.
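You can watch that overhead accrue per request with curl's built-in timers. A quick sketch (swap in your own URL):

# time_connect       = TCP handshake complete
# time_appconnect    = TLS handshake complete
# time_starttransfer = first byte of the response
curl -o /dev/null -s -w "connect: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s\n" https://example.com/

Run it once against a host in Amsterdam and once against one in Oslo; the delta in time_connect alone tells the story.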

Architecting the Edge Node

To leverage local presence, we don't just need a server nearby; we need a stack tuned for high concurrency. My go-to setup for 2014 is a Linux KVM VPS (CentOS 6.5 or Ubuntu 12.04 LTS) running Nginx as the terminator and Varnish 3.0 as the caching engine.
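On CentOS 6.5 the quickest route is EPEL for Nginx; for Varnish 3.0 the project's own repository (repo.varnish-cache.org) is the safer bet, since distro packages tend to lag. Assuming one of those repos is enabled:

yum install -y nginx varnish
chkconfig nginx on
chkconfig varnish on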

1. The Kernel Tuning

Before you even install a package, you need to prep the OS. Default Linux distributions are tuned for general-purpose desktop usage, not high-performance edge serving. We need to open up the ephemeral port range and tweak the backlog.

Edit /etc/sysctl.conf:

# Increase system-wide file descriptors
fs.file-max = 2097152

# Widen the port range for outgoing connections
net.ipv4.ip_local_port_range = 1024 65535

# Enable TCP Reuse to reduce TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1

# Boost the backlog for high burst traffic
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# Adjust TCP buffer sizes for modern high-speed links
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

Apply with `sysctl -p`. If you skip this, your fancy Nginx setup will choke under load regardless of your CPU power.
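One caveat: fs.file-max is system-wide, but each process is still capped by its own nofile limit. If Nginx is going to juggle tens of thousands of sockets, raise the per-process ceiling too, for example in /etc/security/limits.conf:

# Let the nginx user actually use the descriptors fs.file-max allows
nginx  soft  nofile  65535
nginx  hard  nofile  65535

Mirror it with worker_rlimit_nofile 65535; in nginx.conf and the whole chain is consistent.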

2. Nginx: The Termination Layer

We use Nginx 1.4.x (Stable). Its job is to handle the client connection, terminate SSL (if you are one of the forward-thinkers moving to HTTPS), and pass requests to Varnish. Apache is simply too heavy for this specific layer due to its thread/process model.

Here is a snippet for `nginx.conf` that prioritizes raw throughput:

worker_processes auto;
events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # Hide version to annoy script kiddies
    server_tokens off;
    
    # Zero copy is essential for static file serving
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    
    # Buffers (deliberately small; raise these if clients send large
    # cookies or you start seeing 400 "Request Header Too Large" errors)
    client_body_buffer_size 10k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;
    
    # Timeouts (don't let slow clients tie up resources)
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;
    
    # Gzip is CPU intensive but bandwidth saving
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_types text/plain application/x-javascript text/xml text/css;
}

Pro Tip: On CoolVDS KVM instances, we expose the host CPU flags. This means if you compile Nginx from source, you can use the `--with-cc-opt='-O3'` flag to squeeze out extra cycles, though the repo versions are usually sufficient for 99% of use cases.
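For completeness, here is a minimal sketch of the termination block itself. It assumes Varnish is listening on its default 127.0.0.1:6081; the server name and certificate paths are placeholders:

server {
    listen 80;
    listen 443 ssl;
    server_name example.no;

    ssl_certificate     /etc/nginx/ssl/example.no.crt;
    ssl_certificate_key /etc/nginx/ssl/example.no.key;

    location / {
        # Hand everything to Varnish, preserving the client's identity
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}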

3. Varnish: The Accelerator

Varnish Cache 3.0 sits behind Nginx. It stores your content in RAM. RAM is orders of magnitude faster than even the SSDs we use at CoolVDS. The goal is to never hit the backend PHP/MySQL stack unless absolutely necessary.
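How much RAM Varnish gets is explicit, not automatic. On EL6 the daemon options live in /etc/sysconfig/varnish; a sketch of the relevant lines, sized for a modest VPS:

# /etc/sysconfig/varnish (excerpt)
VARNISH_LISTEN_PORT=6081           # Nginx proxies to this port
VARNISH_ADMIN_LISTEN_PORT=6082     # management CLI
VARNISH_STORAGE="malloc,512M"      # keep the cache entirely in RAM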

A basic `default.vcl` to strip cookies from static files (essential, otherwise Varnish won't cache them):

backend default {
    .host = "127.0.0.1";
    .port = "8080"; # Apache or Nginx backend
}

sub vcl_recv {
    # Normalize Accept-Encoding to reduce cache variations
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|mp4|flv)$") {
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            remove req.http.Accept-Encoding;
        }
    }

    # Strip cookies for static files
    if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)$") {
        unset req.http.cookie;
    }
}
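
Stripping the request cookie is only half the job: if the backend answers with a Set-Cookie header, Varnish still refuses to cache the object. A matching vcl_fetch (the Varnish 3 name for the response-side hook) to clean that up and pin a TTL on static assets:

sub vcl_fetch {
    if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)$") {
        # A stray Set-Cookie from the backend makes the object uncacheable
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 24h;
    }
}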

Data Sovereignty and The "Datatilsynet" Factor

Beyond speed, there is a legal argument for keeping data in Norway. The Personal Data Act (Personopplysningsloven) is strict, and Datatilsynet, the Norwegian Data Protection Authority, enforces it actively. The EU Data Protection Directive (95/46/EC) provides the broader framework Norway follows through the EEA, but hosting data physically within Norwegian borders simplifies compliance for local businesses, particularly where banking and health data are involved.

When you host on a generic US cloud provider, you are subject to the PATRIOT Act. When you host on CoolVDS in Oslo, your data resides under Norwegian jurisdiction. For a CTO, this is risk mitigation.

The Hardware Reality

Software tuning only goes so far; at some point the underlying hardware of your VPS becomes the bottleneck. In 2014, many providers are still pushing "Enterprise SAS" (spinning disks) in RAID arrays. This is legacy tech.

To truly achieve "Edge" performance, you need high IOPS. We have standardized on SSD RAID-10 for our Norwegian clusters. The I/O wait times on spinning disks can cause your database to lock up during traffic spikes. SSDs eliminate that mechanical seek time latency.

Benchmarking Disk I/O:

If you want to test your current provider, run this (but be careful, it stresses the disk):

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

If you aren't seeing write speeds north of 300 MB/s on a VPS, you are running on outdated infrastructure.
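Keep in mind what dd actually measures: sequential throughput, fed from /dev/zero, which some storage layers compress a little too happily. What a database feels under load is random 4k I/O. If fio is installed, a more honest sketch:

# Random 4k writes, direct I/O to bypass the page cache
fio --name=randwrite --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --size=512m --runtime=30 --group_reporting

A spinning SAS array typically struggles past a few hundred IOPS on this test; a proper SSD array posts tens of thousands.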

Conclusion

Edge computing in 2014 isn't a buzzword—it's a geography lesson. If your customers are in Oslo, Bergen, or Trondheim, your server should be too. By combining local peering at NIX, a tuned Linux kernel, and the SSD performance of CoolVDS, you provide a user experience that international competitors physically cannot match.

Don't let latency kill your project. Deploy a high-performance SSD VPS in Oslo today with CoolVDS, and give your users response times so low they'll swear they're pinging `127.0.0.1`.