Edge Computing in 2019: Beating the Speed of Light to Oslo

Physics Does Not Negotiate: The Case for Norwegian Edge Nodes

We need to stop pretending that "The Cloud" is magic. It is just someone else's computer, and usually that computer is sitting in a massive datacenter in Frankfurt, Amsterdam, or London. For most web traffic, that's fine. But if you are building high-frequency trading (HFT) algorithms, managing real-time IoT sensor arrays for the oil sector in Stavanger, or hosting 128-tick CS:GO servers, physics is your enemy.

Light in fiber optics travels roughly 30% slower than in a vacuum. A round trip from Oslo to Frankfurt usually costs you 25-35ms depending on routing efficiency. In the world of real-time data processing, 30ms is an eternity. This is where Edge Computing moves from a buzzword to a hard architectural requirement. In 2019, we aren't waiting for 5G to save us; we are building the infrastructure right now using bare-metal performance principles.

The "Frankfurt Penalty" and Local Peering

When your userbase is in Norway, routing traffic out of the country just to process a request is inefficient. The core of a solid edge strategy is proximity. By peering at the Norwegian Internet Exchange (NIX), we keep traffic local. If your server is in a CoolVDS datacenter in Oslo and your user is on Telenor fiber in Drammen, latency can drop below 2ms. To the user, that is effectively instantaneous.
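Do not take my word for it; measure it from the networks your users are actually on. A rough sketch with mtr (the hostname is a placeholder, reusing the example domain from the Nginx config further down):

# Trace the routing path and per-hop RTT from a client network to your node
# Hostname is a placeholder - substitute your own edge or origin server
mtr --report --report-cycles 20 edge.norway.example.com

Run the same report against your current Frankfurt or Amsterdam origin and compare the averages; the difference is the latency budget you win back by moving ingress to Oslo.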

But proximity is useless if the TCP stack on your server is garbage. I often see developers spin up a default Ubuntu 18.04 instance and wonder why they can't handle high packet throughput. You need to tune the kernel.

Kernel Tuning for High-Throughput Edge Nodes

For an edge node ingesting thousands of small UDP packets (common in IoT and gaming) or handling rapid TCP handshakes, the defaults are too conservative. Open your /etc/sysctl.conf and look at these parameters.

# /etc/sysctl.conf - Optimized for Low Latency Edge Node (Nov 2019)

# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Maximize the backlog for incoming connections
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535

# Allow reuse of sockets in TIME_WAIT state for new outgoing connections
# Note: tcp_tw_recycle is deprecated/removed in newer kernels; use reuse instead
net.ipv4.tcp_tw_reuse = 1

# Detect and drop dead connections faster
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6

# TCP Fast Open (TFO) to reduce handshake RTT
net.ipv4.tcp_fastopen = 3

# BBR Congestion Control (Available in Kernel 4.9+)
# This is crucial for unstable mobile networks (4G/LTE)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Apply these changes with sysctl -p. The inclusion of BBR (Bottleneck Bandwidth and Round-trip propagation time) is critical here. Since edge nodes often talk to mobile devices on 4G networks where packet loss varies, BBR maintains higher throughput than the traditional CUBIC algorithm.

Pro Tip: Never blind-apply sysctl settings. Monitor your current values using sysctl -a | grep [value] before changing them. If you are running a database on the same node, ensure vm.swappiness is set to 1 or 0 to avoid disk swapping killing your latency.
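Before trusting the BBR lines above, verify that the kernel actually offers BBR and that it became the active algorithm after sysctl -p. A minimal check, assuming a 4.9+ kernel (the tcp_bbr module may need loading first on some distributions):

# Load the BBR module if it is not built in, then confirm availability and the active algorithm
modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc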

Real-World Use Case: Industrial IoT Ingestion

Let's look at a concrete scenario we dealt with recently. A client needed to aggregate sensor data from a fleet of electric transport vehicles in Oslo. Sending raw data to AWS us-east-1 was causing timeouts and data loss due to jitter. We moved the ingestion point to a local CoolVDS instance.

The stack was simple but robust: Telegraf (collection) -> InfluxDB (storage) -> Grafana (visualization). With this kind of pipeline, the bottleneck is almost always disk I/O: InfluxDB is write-heavy, and on standard SATA SSDs or networked block storage (common in large public clouds), write latency spikes under load.
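Before blaming the database, benchmark the disk itself. A rough sketch with fio, using a 4K random-write pattern that loosely resembles a time-series ingest load (the file path, size, and runtime are illustrative):

# Direct 4K random writes - watch the completion latency (clat) percentiles in the output
fio --name=tsdb-write-test --filename=/var/lib/influxdb/fio-test \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --size=1G --runtime=60 --time_based --group_reporting

Compare the 99th-percentile completion latency between local NVMe and networked block storage; the gap usually tells the whole story.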

We utilize NVMe storage directly attached to the hypervisor bus. This reduces I/O wait times drastically. Here is how we configured the InfluxDB retention policy to keep the edge node light, offloading historical data later during off-peak hours:

-- InfluxDB CLI
-- Create a retention policy for high-precision "hot" data (7 days)
CREATE RETENTION POLICY "hot_storage" ON "sensor_data" DURATION 7d REPLICATION 1 DEFAULT;

-- Downsample the current 7-day hot window for long-term storage (before it ages out of the edge retention policy)
SELECT mean("temperature") INTO "sensor_data_historical"."autogen"."downsampled_temp" FROM "sensor_data" WHERE time > now() - 7d GROUP BY time(1h)

By keeping only 7 days of high-resolution data on the edge, we keep the in-memory index small and query responses stay under 50ms.
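The SELECT ... INTO above is a one-shot query that has to be re-run by hand or from cron. On InfluxDB 1.x, the same downsampling can be scheduled as a continuous query instead; a sketch, with a hypothetical CQ name:

-- InfluxDB CLI
-- Schedule the downsampling automatically instead of running it by hand
CREATE CONTINUOUS QUERY "cq_downsample_temp" ON "sensor_data"
BEGIN
  SELECT mean("temperature") INTO "sensor_data_historical"."autogen"."downsampled_temp"
  FROM "sensor_data"
  GROUP BY time(1h)
END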

The Buffer Bloat Problem

When acting as an edge proxy, you are the shield between the user and your heavy backend applications. Nginx is the standard here. However, with its defaults Nginx buffers the entire upstream response, spilling larger responses to temporary files on disk, which adds latency.

If you are serving static assets or API responses to Nordic users, turn off disk buffering for the proxy where possible and rely on memory.

# /etc/nginx/nginx.conf snippet

http {
    # ... basic settings ...

    # Optimize for file serving
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Micro-caching at the edge
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:10m max_size=1g inactive=60m use_temp_path=off;

    server {
        listen 80;
        server_name edge.norway.example.com;

        location /api/ {
            proxy_pass http://backend_upstream;
            
            # Don't buffer responses to disk, stream them immediately
            proxy_buffering off;
            
            # Or, if caching, use the memory zone defined above
            # proxy_cache edge_cache;
            # proxy_cache_valid 200 30s;
        }
    }
}

Setting proxy_buffering off; is aggressive. It means Nginx streams data to the client as it arrives from the backend. For slow clients (mobile), this keeps the upstream connection tied up for the entire transfer. But for fast, low-latency API calls between services in the same region, it removes the buffering overhead entirely.
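If the same vhost also has to serve slow mobile clients, the micro-cache commented out above is the safer middle ground. A sketch of what an enabled variant could look like (the location path and cache times are illustrative):

# Inside the same server block as the /api/ location above
location /api/cached/ {
    proxy_pass http://backend_upstream;

    # Serve repeated GETs from the edge cache for a short window
    proxy_cache edge_cache;
    proxy_cache_valid 200 30s;
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout updating;

    # Expose hit/miss status to make the cache observable
    add_header X-Cache-Status $upstream_cache_status;
}

With proxy_cache_lock on, only one request per cache key is forwarded to the backend while the entry is being populated, which keeps a burst of identical API calls from stampeding the upstream.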

Why "Managed" Often Means "Slow"

Many providers offer "Managed Kubernetes" or "Container-as-a-Service". While convenient, these layers add abstraction. In 2019, CNI overlay networks such as Flannel (or Calico running in IPIP/VXLAN mode) still introduce a small CPU penalty for encapsulation and decapsulation.

For pure performance, we still prefer KVM virtualization without the container overlay overhead for the edge ingress nodes. CoolVDS provides root access to KVM instances. You control the OS, the kernel, and the network stack. You aren't sharing a kernel with 500 other containers.

Privacy and The Local Advantage

We cannot ignore the legal landscape. With GDPR in full effect and the Datatilsynet (Norwegian Data Protection Authority) keeping a close watch, data residency is not just about speed; it is about compliance. Storing temporary edge data on servers physically located in Oslo simplifies your compliance mapping significantly compared to shunting everything to a US-owned cloud provider's European region.

Edge computing isn't about replacing the central cloud; it's about offloading the immediate, latency-sensitive work to where the users actually are. If your users are in Norway, your servers should be too.

Don't let latency kill your application's user experience. Deploy a high-performance NVMe KVM instance on CoolVDS today and test your ping times from NIX.