Latency is the Enemy: Why "Edge Computing" in Norway Matters for Your 2016 Stack
Let’s cut through the marketing noise. Everyone is talking about "The Cloud," but for those of us staring at terminal screens at 3 AM, the cloud has a physical location. And if that location is Virginia while your users are in Oslo, you have a problem. Physics is stubborn; the speed of light isn't getting faster.
In 2016, mobile usage is overtaking desktop, and connections on 3G and 4G networks are volatile. If your handshake takes 200ms because your server is across the Atlantic, you've lost them. "Edge Computing" isn't just about CDNs caching JPEGs anymore. It's about moving compute logic to the geographic edge. For the Nordic market, that means hosting in Norway.
The "Safe Harbor" Panic and Data Sovereignty
Before we touch `sysctl.conf`, we need to address the elephant in the server room. The European Court of Justice struck down the Safe Harbor agreement last October. If you are storing Norwegian user data on servers owned by US giants without strict legal frameworks, you are walking on thin ice. The Norwegian Data Protection Authority (Datatilsynet) is watching.
Hosting locally isn't just a performance tweak anymore; it's a compliance strategy. By keeping your database on a VPS in Oslo, you circumvent the headache of cross-border data transfer legality. This is the ultimate "Edge" use case: legal safety.
Use Case 1: IoT Aggregation with MQTT
I recently worked on a project involving sensors on fishing vessels in the North Sea. Bandwidth is expensive and satellite links have massive latency. Sending raw data to a centralized cloud in Frankfurt failed miserably. The connection drops were too frequent.
The solution? An "Edge" node running on a high-performance VPS in Oslo to act as an aggregation point. We used Mosquitto (an MQTT broker) to handle the lightweight pub/sub messaging.
Here is how we tuned the `mosquitto.conf` to handle thousands of unstable connections without eating all our RAM:
```conf
# /etc/mosquitto/mosquitto.conf

# Plain MQTT listener on the standard port
listener 1883
# -1 = unlimited; the OS file descriptor limit is the real ceiling
max_connections -1

# Persistence is key if the link drops
persistence true
persistence_location /var/lib/mosquitto/

# Log efficiently - disk I/O can kill you
log_dest file /var/log/mosquitto/mosquitto.log
log_type error
log_type warning
# notice and debug are too noisy for production
```
By terminating the MQTT connection in Oslo (CoolVDS), we reduced the round-trip time (RTT) significantly compared to the central API in Germany. The local node buffers the data and batch-uploads it via a reliable REST API when the link is stable.
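As a rough illustration of that buffer-and-forward pattern, here is a minimal sketch using the paho-mqtt and requests libraries. The topic filter, batch size, and API endpoint are hypothetical stand-ins; production code would also need retry limits and disk spooling.

```python
# buffer_and_forward.py - minimal sketch of an edge aggregation node
# pip install paho-mqtt requests
import requests
import paho.mqtt.client as mqtt

BATCH_SIZE = 500                            # hypothetical batch threshold
API_URL = "https://api.example.de/ingest"   # hypothetical central API
buffer = []

def on_message(client, userdata, msg):
    # Collect readings locally; the edge node absorbs the flaky sensor links
    buffer.append({"topic": msg.topic, "payload": msg.payload.decode()})
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    global buffer
    try:
        # One big POST over a stable wire beats thousands of tiny uplinks
        requests.post(API_URL, json=buffer, timeout=10).raise_for_status()
        buffer = []
    except requests.RequestException:
        pass  # keep buffering; retry on the next batch

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("vessels/+/sensors/#")
client.loop_forever()
```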
Use Case 2: TCP Optimization for High Latency Clients
Default Linux kernel settings are often conservative, tuned for 100Mbps LANs, not gigabit WANs with mobile clients. If you are serving content from the edge, you need to tune the TCP stack. I've seen throughput double on long-haul links just by letting the TCP window grow.
On a CoolVDS KVM instance (running CentOS 7 or Ubuntu 14.04), open `/etc/sysctl.conf` and apply these settings. This is standard procedure for any "Battle-Hardened" admin:
```conf
# /etc/sysctl.conf

# Raise the maximum socket buffer sizes to 16 MB
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP autotuning ranges: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Enable TCP window scaling (on by default on modern kernels, but be explicit)
net.ipv4.tcp_window_scaling = 1

# Protection against SYN flood attacks (common on public-facing edge nodes)
net.ipv4.tcp_syncookies = 1

# Reuse sockets stuck in TIME_WAIT for new outbound connections
# (not to be confused with the NAT-breaking tcp_tw_recycle)
net.ipv4.tcp_tw_reuse = 1
```
Apply with `sysctl -p`. These settings allow the TCP window to grow larger, filling the pipe even when latency is present.
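How big do those buffers need to be? Enough to hold one bandwidth-delay product (BDP): the amount of data in flight on the link. A quick sanity check (the link speeds and RTTs below are illustrative, not measurements):

```python
# bdp.py - bandwidth-delay product: how much data is "in flight" on a link
def bdp_bytes(bandwidth_mbps, rtt_ms):
    return int(bandwidth_mbps * 1e6 / 8 * rtt_ms / 1e3)

# Illustrative numbers:
print(bdp_bytes(100, 150))   # 100 Mbps satellite link, 150 ms RTT -> 1,875,000 bytes
print(bdp_bytes(1000, 30))   # 1 Gbps link, 30 ms RTT -> 3,750,000 bytes
```

Both land well under the 16 MB ceiling set above, so the window can keep the pipe full even on a fat, slow link.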
The Tech Stack: Nginx and HTTP/2
2015 gave us the HTTP/2 RFC. In 2016, if you aren't using it, you are behind. HTTP/2 allows multiplexing multiple requests over a single TCP connection. This is a game-changer for mobile clients on high-latency networks.
However, while the spec technically allows cleartext, every major browser only speaks HTTP/2 over TLS, so in practice it requires encryption. That means handshake overhead: one round trip for TCP, plus two more for the TLS 1.2 negotiation, before the first byte of content moves. This is exactly why your termination point needs to be close to the user. Doing a TLS handshake from Bergen to New York takes ~150ms+; from Bergen to Oslo (CoolVDS), ~15ms.
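You don't have to take those numbers on faith. Here is a minimal sketch using only the Python standard library to time the two handshakes separately (the hostname is a placeholder; point it at your own endpoint):

```python
# tls_timing.py - rough client-side timing of TCP vs. TLS handshake
import socket
import ssl
import time

HOST = "edge.example.no"   # placeholder hostname
PORT = 443

t0 = time.time()
raw = socket.create_connection((HOST, PORT), timeout=5)
t1 = time.time()                      # TCP three-way handshake done

ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=HOST)
t2 = time.time()                      # TLS negotiation done

print("TCP handshake: %.1f ms" % ((t1 - t0) * 1000))
print("TLS handshake: %.1f ms" % ((t2 - t1) * 1000))
tls.close()
```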
Here is a snippet for Nginx 1.9.x to enable HTTP/2 (remember, the flag changed from `spdy` to `http2` recently):
```nginx
server {
    listen 443 ssl http2;
    server_name edge.example.no;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Optimize SSL session caching to reduce handshake CPU usage
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Use strong ciphers (Forward Secrecy is a must in 2016)
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:kEDH+AESGCM';
    ssl_prefer_server_ciphers on;

    location / {
        # backend_upstream must be declared in an upstream block
        # (see the sketch below)
        proxy_pass http://backend_upstream;
    }
}
```
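One thing the snippet above glosses over: `backend_upstream` has to be defined somewhere. A minimal sketch, with a placeholder backend address:

```nginx
upstream backend_upstream {
    server 10.0.0.2:8080;   # placeholder - your application server
    keepalive 32;           # pool of idle connections held open to the backend
}
```

If you use `keepalive`, also set `proxy_http_version 1.1;` and `proxy_set_header Connection "";` inside the `location` block, otherwise Nginx opens a fresh backend connection for every request.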
Pro Tip: Always use `ssl_session_cache`. It drastically reduces the CPU load on your edge server by letting returning clients resume TLS sessions instead of performing a full handshake every time. You can verify it works with `openssl s_client -connect edge.example.no:443 -reconnect` and look for "Reused" in the output.
Why Bare Metal Performance Matters in Virtualization
Not all VPS providers are created equal. In the container vs. VM war, containers (Docker) are winning for deployment ease, but they share the kernel. For true isolation and performance stability, I still prefer KVM.
When you are running a database or a high-traffic message broker at the edge, you cannot afford "noisy neighbors" stealing your I/O cycles. This is why CoolVDS uses KVM virtualization on top of NVMe storage.
Performance Comparison: HDD vs SSD vs NVMe
| Storage Type | IOPS (4K Random Read) | Latency | Verdict |
|---|---|---|---|
| Traditional HDD (7.2k RPM) | ~100 | ~10 ms | Obsolete for DBs |
| Standard SSD (SATA) | ~5,000 - 10,000 | < 1 ms | Good Standard |
| NVMe (CoolVDS) | ~200,000+ | < 0.1 ms | Edge Ready |
If you are processing data at the edge, disk I/O is usually your bottleneck. NVMe eliminates that.
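Don't take IOPS tables on faith, including this one; measure your own instance. A minimal `fio` job file for a 4K random-read test (the file size, queue depth, and runtime here are arbitrary starting points, not tuned values):

```ini
; randread.fio - run with: fio randread.fio
[randread-test]
; bypass the page cache so we test the device, not RAM
direct=1
ioengine=libaio
rw=randread
bs=4k
size=1g
iodepth=32
runtime=60
time_based
```

Compare the IOPS that fio reports against the table above before you commit a database to the box.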
The Logical Conclusion
Edge computing in 2016 isn't science fiction. It's simply the practice of respecting your user's time and your legal team's sanity. Whether you are aggregating MQTT data from oil rigs or serving fast e-commerce content to Oslo, the physical location of your server dictates your baseline latency.
You can spend weeks optimizing your code, but you can't optimize the distance across the Atlantic. Place your workload where it belongs: on a high-performance, local infrastructure.
Ready to lower your RTT? Spin up a KVM instance on CoolVDS today and ping 127.0.0.1 from the true edge of the Nordics.