Edge Computing in 2017: Reducing Latency and Ensuring Data Sovereignty in Norway

The Latency Lie: Why "The Cloud" Isn't Enough

We have spent the last five years migrating everything to centralized clouds—AWS in Frankfurt, Google in Belgium, Azure in Dublin. For 90% of web traffic, that remains a valid strategy. But for the remaining 10%—real-time IoT data processing, high-frequency trading algorithms, and latency-intolerant APIs—physics is becoming a problem.

Light in fiber travels at roughly two-thirds of its speed in a vacuum, and every router hop adds queuing delay on top. A round trip from Oslo to Frankfurt usually sits between 25ms and 35ms. Add the overhead of TLS handshakes, database queries, and poorly optimized application stacks, and you are looking at user-perceptible delays. If you are building for the Norwegian market, hosting in Germany implies a performance penalty from day one.
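
You do not have to take those numbers on faith. A quick check from any shell; both hostnames below are placeholders standing in for an Oslo edge node and a Frankfurt origin:

#!/bin/bash
# Compare round-trip times to an Oslo node and a Frankfurt node.
# Hostnames are placeholders - substitute your own endpoints.
for host in edge-oslo.example.com origin-frankfurt.example.com; do
    echo "--- $host ---"
    ping -c 5 -q "$host" | tail -n 2    # summary lines show min/avg/max RTT
done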

This is where "Edge Computing" stops being a buzzword and starts being an architectural necessity. In May 2017, "Edge" doesn't necessarily mean a Raspberry Pi on a telephone pole. It means moving compute power closer to the data source. It means a high-performance VPS in Oslo, sitting directly on the NIX (Norwegian Internet Exchange), acting as a gateway before data ever hits the central cloud.

Use Case 1: The IoT Data Aggregator

Norway is an industrial nation. From maritime shipping sensors to hydroelectric control systems, we generate massive amounts of time-series data. Sending raw MQTT streams from a sensor in Stavanger to a database in Virginia is bandwidth suicide. It is expensive and unreliable.

The pragmatic architecture for 2017 is the Edge Aggregator. You deploy a local instance to ingest, sanitize, and compress data before batch-uploading it to central storage.

For this workload, disk I/O is the bottleneck. Traditional spinning rust (HDD) VPS setups choke under heavy write loads from sensors. This is why we enforce NVMe storage on CoolVDS KVM instances; the IOPS difference prevents data loss during burst events.
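
Before trusting any storage claim, benchmark it yourself. A minimal fio run that approximates bursty sensor ingest; the file path, size, and runtime here are arbitrary test values:

#!/bin/bash
# Random 4K writes with a deep queue - roughly the pattern of a sensor burst.
# Requires the fio package (apt-get install fio on Ubuntu 16.04).
fio --name=edge-ingest-test --filename=/var/tmp/fio-test \
    --rw=randwrite --bs=4k --size=1G --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
rm -f /var/tmp/fio-test

On NVMe, expect tens of thousands of write IOPS from a run like this; spinning disks typically manage a few hundred.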

Here is a battle-tested configuration for InfluxDB 1.2 (the current stable release) running on a local aggregator node. We need to tune the Write Ahead Log (WAL) to handle high-velocity writes without consuming all available RAM.

# /etc/influxdb/influxdb.conf

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "/var/lib/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "/var/lib/influxdb/wal"

  # Optimization for NVMe storage on CoolVDS
  # Increase the cache limit if you have >4GB RAM
  # (InfluxDB 1.2 expects this value as an integer number of bytes)
  cache-max-memory-size = 1073741824

  # Compact fuller shards faster to free up space
  compact-full-write-cold-duration = "2h"

[retention]
  # Essential for Edge nodes with limited disk space
  enabled = true
  check-interval = "30m"

By setting a strict retention policy locally, you keep only the last 48 hours of high-resolution data on the edge node, while the down-sampled averages are shipped to your central warehouse. This reduces bandwidth egress costs by orders of magnitude.
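
In practice that is two statements in the influx shell. A sketch, assuming a database named sensors with a measurement named readings and a field named value (all placeholder names):

#!/bin/bash
# "sensors", "readings", and "value" are placeholders - adjust to your schema.
# Keep full-resolution data for only 48 hours on the edge node:
influx -execute 'CREATE RETENTION POLICY "edge_raw" ON "sensors" DURATION 48h REPLICATION 1 DEFAULT'

# Continuously down-sample to 5-minute averages destined for the central warehouse:
influx -execute 'CREATE CONTINUOUS QUERY "cq_5m_avg" ON "sensors" BEGIN SELECT mean("value") INTO "sensors"."autogen"."readings_5m" FROM "sensors"."edge_raw"."readings" GROUP BY time(5m), * END'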

Use Case 2: GDPR Preparation and Data Sovereignty

The General Data Protection Regulation (GDPR) was adopted last year and enforcement begins in May 2018. That gives us exactly one year from now to get our houses in order. Many legal teams are already advising that sensitive Norwegian user data should ideally stay within Norwegian borders to simplify compliance, specifically regarding the "transfer to third countries" clauses.

Using a US-owned cloud provider adds legal complexity. Using a Norwegian-domiciled host like CoolVDS simplifies the chain of custody. But jurisdiction isn't enough; you need technical safeguards.

On an edge node handling personally identifiable information (PII), you must treat both physical compromise (a decommissioned disk, a snapshot copied off the host) and network intrusion as realistic threats. Encryption at rest is mandatory. While many rely on provider-level encryption, you should configure LUKS (Linux Unified Key Setup) inside your VM for true isolation.
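
A minimal LUKS sketch for a secondary data volume. The device name /dev/vdb is an assumption about how the attached volume appears inside your VM; confirm with lsblk before running anything, because luksFormat is destructive:

#!/bin/bash
# WARNING: luksFormat wipes the target device. /dev/vdb is an assumed name.
cryptsetup luksFormat /dev/vdb

# Open the container and map it to /dev/mapper/pii_data
cryptsetup luksOpen /dev/vdb pii_data

# Filesystem and mount point for the application's PII data
mkfs.ext4 /dev/mapper/pii_data
mkdir -p /srv/pii
mount /dev/mapper/pii_data /srv/pii

The trade-off: the passphrase must be supplied after every reboot, so an unattended edge node needs a deliberate key-delivery strategy, for example a key file fetched over the VPN at mount time.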

Pro Tip: Never rely on default firewall rules. On a public-facing edge node, your iptables policy should be default DROP.

Below is a script to lock down a standard Ubuntu 16.04 LTS edge node, allowing only SSH (with keys), Web, and VPN traffic.

#!/bin/bash
# Flush existing rules
iptables -F

# Set default chain policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow SSH (Change 22 to your custom port)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow HTTP/HTTPS for Let's Encrypt and API
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Allow VPN (OpenVPN standard port)
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

# Log dropped packets (rate-limited so the log cannot fill the disk)
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: "

# Save rules (restoring them at boot requires the iptables-persistent package)
/sbin/iptables-save > /etc/iptables/rules.v4

Use Case 3: HTTP/2 Termination & Static Caching

If your application serves users in Oslo, but your backend is in London, your Time To First Byte (TTFB) suffers. TCP handshakes over long distances are slow. By placing a reverse proxy in Oslo, you terminate the SSL/TLS connection closer to the user. The heavy lifting of the handshake happens with < 5ms latency.
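
curl can show exactly where the time goes. Against a placeholder edge hostname, the standard write-out variables separate the TCP connect, the TLS handshake, and the first byte:

# Timing breakdown per connection phase; hostname is a placeholder.
curl -so /dev/null \
    -w "tcp: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s\n" \
    https://edge-oslo.example.com/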

With Nginx 1.10+ (available in standard repositories), we can utilize HTTP/2 to multiplex requests. This is significantly more efficient than HTTP/1.1, especially for mobile users on unstable 3G/4G networks in rural Norway.

Deploying Nginx as a reverse proxy on a CoolVDS instance is trivial, but the tuning makes the difference. Do not leave the buffers at default values.

# proxy_pass below needs this upstream pool defined; the address is a
# placeholder for your actual backend.
upstream backend_upstream {
    server 203.0.113.10:8080;
    # Hold idle connections open so backend keepalive actually works
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name edge-oslo.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Optimize TLS for speed
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Proxy Buffering - Essential for decoupling slow clients from fast backends
    proxy_buffers 16 32k;
    proxy_buffer_size 64k;

    location / {
        proxy_pass http://backend_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Enable Keepalive to backend to reduce backend connection overhead
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
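
Once deployed, verify that clients actually negotiate HTTP/2 instead of silently falling back to HTTP/1.1. An ALPN check with openssl (1.0.2 or newer, as shipped with Ubuntu 16.04) against the same placeholder hostname:

# If HTTP/2 is active, the output contains "ALPN protocol: h2"
echo | openssl s_client -alpn h2 -connect edge-oslo.example.com:443 2>/dev/null | grep ALPN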

The Hardware Reality

Software optimization only goes so far. In 2017, the difference between a VPS running on shared SATA storage and one running on dedicated NVMe is night and day. We frequently see "steal time" (cycles your virtual CPU spends waiting for the hypervisor to schedule it) ruin the performance of edge applications on budget hosts.
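
Steal time is easy to measure on any Linux guest; the last column of vmstat (st) reports the percentage of time the hypervisor withheld the CPU from your VM:

# Sample once per second for five seconds; watch the final "st" column.
# Values consistently above a few percent mean a congested host node.
vmstat 1 5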

At CoolVDS, we built our infrastructure on KVM (Kernel-based Virtual Machine). Unlike OpenVZ, KVM provides true hardware virtualization. If a neighbor on the host node spikes their CPU usage, your kernel scheduler remains unaffected. For edge computing—where predictability is as important as raw speed—this isolation is non-negotiable.

Whether you are preparing for GDPR compliance or trying to shave 30ms off your API response time for Norwegian customers, the topology of your network matters. Don't let your data travel across the continent just to come back home.

Ready to lower your latency? Deploy a KVM NVMe instance in our Oslo datacenter today and see the difference in your ping times.