Edge Computing in 2019: Why Latency to Oslo Matters More Than Your Cloud Provider
Let's be honest: the "Cloud" is just someone else's computer, and usually, that computer is sitting in a massive datacenter in Frankfurt, Dublin, or Amsterdam. For 90% of web traffic, that's fine. But I’ve spent the last six months debugging a distributed sensor network for a fish farming client in Northern Norway, and I can tell you: physics is undefeated.
When you are trying to process high-frequency sensor data or run a real-time game server, a 35ms round-trip time (RTT) to Frankfurt isn't just an annoyance; it's a failure state. This is where Edge Computing moves from marketing fluff to architectural necessity. In late 2019, we aren't waiting for 5G to save us. We are building the edge right now using high-performance VPS nodes close to the user.
Here is how we architect for the edge, why Norway demands its own infrastructure, and how to tune your Linux stack for the lowest possible latency.
The Latency Trap: Why Centralized Cloud Fails
Most developers default to `eu-central-1` and call it a day. But if your users are in Oslo, Bergen, or Trondheim, your packets are taking a scenic route through Sweden or Denmark to get processed. For an IoT gateway aggregating MQTT messages, that extra latency introduces jitter; for a Counter-Strike server, it introduces lag.
Pro Tip: Use mtr (My Traceroute) to verify your path. If your packets jump through 15 hops to reach a server that claims to be "local," you are being penalized by network topology, not bandwidth. Direct peering at NIX (Norwegian Internet Exchange) is vital.
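A report like this makes the hop count and per-hop latency obvious (substitute your own target for the hypothetical edge.example.no):
# 100-cycle report; -w prints wide output with full hostnames
mtr --report --report-cycles 100 -w edge.example.no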
Use Case 1: The IoT Aggregation Node
In our fish farm scenario, sending raw video feeds and temperature data directly to AWS S3 was costing a fortune in bandwidth and storage. The solution? An Edge Gateway running on a CoolVDS instance in Oslo.
We deploy a lightweight Kubernetes distribution (k3s, which just hit stability this year) or a simple Docker Compose stack to pre-process data: filter out the noise and send only anomalies to the central cloud.
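As a rough sketch of that filtering stage — the topic layout, the JSON celsius field, and the central broker hostname below are illustrative assumptions, not our production setup — a shell pipeline built on the standard mosquitto clients and jq can handle the first pass:
#!/bin/sh
# Subscribe locally, forward only out-of-band temperature readings upstream.
# '%t %p' prints "topic payload" per message (mosquitto clients >= 1.5).
mosquitto_sub -h localhost -t 'sensors/+/temperature' -F '%t %p' | \
while read -r topic payload; do
  # jq -e exits non-zero when the expression is false, so only anomalies pass
  if echo "$payload" | jq -e '.celsius < 2 or .celsius > 18' >/dev/null; then
    mosquitto_pub -h mqtt.central.example.no -t "anomalies/$topic" -m "$payload"
  fi
done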
Here is a battle-tested mosquitto.conf for an edge MQTT broker handling thousands of sensors. Note the persistence settings—essential when the edge network is flaky.
# mosquitto.conf (mounted into the container at /mosquitto/config/mosquitto.conf)
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
# Log to stdout so `docker logs` catches connection drops
log_dest stdout
# Optimize for performance
max_queued_messages 1000
max_connections -1
# Listener
listener 1883
protocol mqtt
Deploying this via Docker on a localized NVMe VPS ensures that disk I/O doesn't become the bottleneck during high-write bursts:
docker run -d \
  --name edge-mqtt \
  -p 1883:1883 \
  -v /mnt/nvme/mosquitto/data:/var/lib/mosquitto \
  -v /mnt/nvme/mosquitto/conf:/mosquitto/config \
  eclipse-mosquitto:1.6
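A quick smoke test from the VPS itself confirms the broker is accepting connections:
# Subscribe in the background (-C 1 exits after one message), then publish
mosquitto_sub -h localhost -t 'test/#' -C 1 &
mosquitto_pub -h localhost -t test/ping -m 'hello from the edge'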
Use Case 2: High-Performance VPN Gateway
With GDPR and the recent confusion around data transfer mechanisms, many Norwegian companies want their traffic encrypted and terminated inside Norwegian borders before it touches the open internet. OpenVPN is the old standard, but in 2019, WireGuard is changing the game. It is not yet in the mainline kernel (coming soon, hopefully), but it runs beautifully via DKMS.
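On Ubuntu 18.04, the DKMS module and tools come from the WireGuard maintainers' PPA — one extra repository is the only assumption here:
sudo add-apt-repository ppa:wireguard/wireguard
sudo apt update
sudo apt install wireguard   # builds the kernel module via DKMS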
WireGuard on a CoolVDS node offers significantly lower latency overhead compared to IPsec or OpenVPN due to its lean codebase. Here is how we configure the server interface for maximum throughput:
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_PRIVATE_KEY]
[Peer]
PublicKey = [CLIENT_PUBLIC_KEY]
AllowedIPs = 10.0.0.2/32
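Two things before bringing it up: the MASQUERADE rules above do nothing unless IP forwarding is enabled, and wg show is the fastest way to confirm a handshake:
# Enable forwarding (persist it in /etc/sysctl.conf as well)
sudo sysctl -w net.ipv4.ip_forward=1
# Bring the interface up and check for a recent handshake
sudo wg-quick up wg0
sudo wg show wg0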
Tuning Linux for the Edge
You cannot just spin up a default Ubuntu 18.04 ISO and expect it to handle edge workloads. You need to tune the kernel for network throughput and low latency. This is where the underlying hardware matters. CoolVDS provides KVM virtualization, which allows us to modify kernel parameters that container-based virtualization (like OpenVZ or LXC) often blocks.
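If you are not sure what your provider actually runs, systemd-based distros can tell you directly:
systemd-detect-virt
# "kvm" means the sysctl tuning below will stick; "openvz" or "lxc" means
# many of these knobs are read-only from inside the guest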
Add these to your /etc/sysctl.conf to optimize for a high-bandwidth, low-latency environment:
# Increase system file descriptor limits
fs.file-max = 2097152
# TCP Hardening and Optimization
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_slow_start_after_idle = 0
# Buffer sizes for 10Gbps links (standard on good hosts)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Congestion control - BBR is preferred in 2019 if kernel >= 4.9
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Run sysctl -p to apply. Switching to BBR congestion control alone can improve throughput on erratic edge networks by 30%.
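Verify that the congestion control actually switched — on some kernels the tcp_bbr module needs loading first:
sudo modprobe tcp_bbr                     # no-op if already loaded or built in
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control    # should print: ... = bbr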
The Hardware Reality: NVMe is Non-Negotiable
In an edge scenario, your server is often acting as a cache. It's reading from a local database (like Redis or PostgreSQL) to avoid hitting the central master database. If your storage is spinning rust (HDD) or even cheap SATA SSDs, your I/O wait times will kill the latency gains you made by moving closer to the user.
We benchmarked a standard SSD VPS against a CoolVDS NVMe instance using fio. The random read/write performance on NVMe was nearly 6x higher. When you are serving dynamic content or handling thousands of IoT writes per second, that hardware difference is the only metric that counts.
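A representative fio job for this kind of comparison — a 70/30 random read/write mix at 4k blocks, roughly the pattern a busy broker's persistence layer generates (the file path is whatever volume you are testing, and the exact parameters are illustrative):
fio --name=edge-rw --filename=/mnt/nvme/fio.test --size=2G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
    --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting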
Comparison: Latency from Tromsø
| Target Location | Provider Type | Ping (Avg) |
|---|---|---|
| Frankfurt (AWS/Google) | Hyperscaler | ~45-55ms |
| Amsterdam (DigitalOcean) | Cloud VPS | ~35-40ms |
| Oslo (CoolVDS) | Local Edge | ~12-18ms |
Data Sovereignty and GDPR
Since GDPR enforcement began last year, legal teams are nervous. Storing customer PII (Personally Identifiable Information) on US-owned infrastructure is becoming a compliance headache. By using a Norwegian host like CoolVDS, you get data residency by default. The data stays in Oslo. It is processed in Oslo. It falls under the Norwegian implementation of the GDPR.
Conclusion
Edge computing isn't about replacing the cloud; it's about optimizing the last mile. Whether you are running a Kubernetes cluster for sensor data or a high-tick-rate game server, distance equals delay.
Don't let network topology dictate your application's performance. Deploy your edge nodes where your users are.
Need to test the difference? Spin up a CoolVDS NVMe instance in Oslo today and run your own mtr report. The speed speaks for itself.