Edge Computing Architectures: Conquering Latency in the High North
Let's be honest about the speed of light: it is annoyingly slow. If you are running real-time applications in Oslo, Bergen, or Trondheim, and your backend sits in a data center in Frankfurt or Amsterdam, you are fighting a losing battle against physics. You are looking at a round-trip time (RTT) of 25-40ms in best-case scenarios. Add jitter, packet loss at peering points, and the overhead of TLS handshakes, and your "real-time" app feels sluggish.
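You do not have to take my word for it. From any box in Norway, a quick ping or mtr run against your current backend tells you exactly what you are paying in milliseconds (the Frankfurt hostname below is just a placeholder for your own endpoint):

# Round-trip time to a Central European backend (replace the hostname with yours)
ping -c 20 backend.example-frankfurt.de

# Per-hop latency and packet loss at peering points
mtr --report --report-cycles 50 backend.example-frankfurt.de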
For high-frequency trading, industrial IoT (IIoT), or interactive gaming, that delay is unacceptable. I have seen projects fail not because the code was bad, but because the architect assumed the network was infinite and instant. It isn't.
In 2020, the conversation is shifting from "Cloud First" to "Edge Smart." For the Nordic market, this means processing data where it is generated: right here in Norway. We are going to look at how to architect an edge node capable of handling high-throughput sensor data or request termination using standard Linux tools, Docker, and KVM virtualization.
The Use Case: Industrial IoT (IIoT) Aggregation
Consider a realistic scenario for the Norwegian market: Salmon farming. Automated feeding systems and environmental sensors in remote fjords generate terabytes of data. Sending raw MQTT streams to a cloud provider in Central Europe is expensive and risky. If the fiber gets cut (or a trawler drags an anchor over it), the local system goes blind.
The solution is an Edge Node running on high-speed infrastructure in Oslo to aggregate, filter, and alert before syncing summarized data to the central cloud. This keeps the data within Norwegian jurisdiction, satisfying Datatilsynet and GDPR requirements regarding data sovereignty, and reduces bandwidth costs.
The Stack: KVM, Docker, and MQTT
We avoid OpenVZ or LXC for the core host. Why? Because in a multi-tenant environment, you cannot afford "noisy neighbors" stealing your CPU cycles when a sensor burst comes in. We use KVM (Kernel-based Virtual Machine) backed by NVMe storage. This is the standard deployment model on CoolVDS, and frankly, anything less is negligence for production workloads.
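If you are not sure what your current provider actually runs underneath your VM, it takes thirty seconds to check from inside the guest. This is a quick sanity check, nothing more:

# Prints "kvm" on a KVM guest, "lxc" or "openvz" on container-based hosting
systemd-detect-virt

# Confirm full virtualization and see the CPU model exposed to the guest
lscpu | grep -E 'Hypervisor|Virtualization|Model name'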
Here is the architecture we will deploy:
- Ingest: Eclipse Mosquitto (MQTT Broker)
- Processing: Telegraf (Data collection agent)
- Storage (Hot): InfluxDB (Time series database)
- Transport: WireGuard (VPN for secure backhaul)
1. Tuning the Host Network
Before installing software, we must tune the Linux kernel. The default settings in CentOS 8 or Ubuntu 20.04 are conservative. We need to open up the TCP buffers to handle bursty traffic without dropping packets.
Add the following to /etc/sysctl.conf:
# Increase TCP buffer sizes for high-latency handling
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Enable TCP BBR congestion control (Available in Kernel 4.9+)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
# Increase the backlog for high connection rates
net.core.netdev_max_backlog = 5000
Apply these changes immediately:
sysctl -p
Pro Tip: TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) is critical here. It models the network pipe rather than reacting to packet loss, which significantly improves throughput on unstable edge connections. Most providers don't enable this by default.
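After reloading sysctl, confirm that BBR and fq are actually in effect. The checks below assume your kernel ships the tcp_bbr module and that your primary interface is eth0:

# Should print "bbr"
sysctl net.ipv4.tcp_congestion_control

# Algorithms currently available to the kernel
sysctl net.ipv4.tcp_available_congestion_control

# Verify the fq qdisc is attached (adjust eth0 to your interface name)
tc qdisc show dev eth0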
2. The Aggregation Layer
We will use Docker to containerize the stack. This allows for easy updates and isolation. However, disk I/O is usually the bottleneck for databases like InfluxDB. This is where the underlying hardware matters. Using spinning rust (HDD) or even standard SATA SSDs will choke during write-heavy bursts. NVMe storage is mandatory here to keep IO wait times near zero.
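Do not take storage claims at face value; benchmark them. A short fio run with 4k random writes and direct I/O is a rough stand-in for a bursty time-series ingest pattern (fio and sysstat are in the standard repos; the test file path is arbitrary):

# 30 seconds of 4k random writes, bypassing the page cache
fio --name=tsdb-write-test --filename=./fio-test.dat --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting

# Watch iowait and per-device latency while the test runs
iostat -x 1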
Here is a production-ready docker-compose.yml for the edge node:
version: '3'

services:
  mosquitto:
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
      - "8883:8883"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: always
    ulimits:
      nofile:
        soft: 65536
        hard: 65536

  influxdb:
    image: influxdb:1.8
    environment:
      - INFLUXDB_DB=iot_data
      - INFLUXDB_ADMIN_USER=admin
      - INFLUXDB_ADMIN_PASSWORD=ComplexPassword2020!
    volumes:
      - ./influxdb/data:/var/lib/influxdb
    restart: always

  telegraf:
    image: telegraf:1.14
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
    links:
      - influxdb
      - mosquitto
    restart: always
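Because the compose file bind-mounts ./mosquitto/config, Mosquitto expects a mosquitto.conf inside that directory before it will start. The minimal example below is a starting point only, not a hardened setup: it leaves anonymous access open, which is acceptable strictly on a private network, and skips TLS on 8883 for now:

# Minimal broker config in the directory mounted by docker-compose
mkdir -p ./mosquitto/config ./mosquitto/data
cat > ./mosquitto/config/mosquitto.conf <<'EOF'
listener 1883
persistence true
persistence_location /mosquitto/data/
allow_anonymous true
EOF

# Bring the stack up and confirm all three containers are running
docker-compose up -d
docker-compose ps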
3. Configuring Telegraf for Efficient Batching
Telegraf is the glue. It reads from MQTT and writes to InfluxDB. To save network overhead and reduce disk I/O pressure (IOPS), we configure it to batch metrics.
Inside telegraf.conf:
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"

[[inputs.mqtt_consumer]]
  servers = ["tcp://mosquitto:1883"]
  topics = [
    "sensors/+/temperature",
    "sensors/+/humidity"
  ]
  data_format = "json"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "iot_data"
  timeout = "5s"
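With the stack running, push a test message through the whole pipeline before wiring up real sensors. The topic and JSON payload below are made up for illustration; by default, the mqtt_consumer input writes into a measurement named mqtt_consumer:

# Publish a fake temperature reading (mosquitto-clients package on the host)
mosquitto_pub -h localhost -p 1883 -t "sensors/pen01/temperature" -m '{"value": 8.4}'

# Wait for a flush interval to pass, then query InfluxDB inside the container
sleep 15
docker-compose exec influxdb influx -database iot_data \
    -execute 'SELECT * FROM "mqtt_consumer" ORDER BY time DESC LIMIT 5'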
4. Secure Backhaul with WireGuard
In 2020, IPsec is operationally heavy for lean edge devices, and OpenVPN's userspace processing adds measurable overhead. With the release of Linux kernel 5.6 earlier this year, WireGuard is now built directly into the kernel. It offers state-of-the-art cryptography and is significantly faster than both alternatives.
We use WireGuard to tunnel traffic from our CoolVDS node in Oslo back to the corporate HQ or central analytics cluster. This ensures that even if the traffic traverses the public internet, it remains encrypted.
Generating keys:
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
Configuration for /etc/wireguard/wg0.conf:
[Interface]
Address = 10.100.0.2/24
SaveConfig = true
PrivateKey =
ListenPort = 51820
[Peer]
PublicKey =
AllowedIPs = 10.100.0.0/24
Endpoint = hq.example.com:51820
PersistentKeepalive = 25
Bring up the interface:
wg-quick up wg0
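Then confirm the handshake actually happened and make the tunnel survive a reboot. The 10.100.0.1 address below assumes the HQ side of the tunnel sits at the first address of the subnet:

# Handshake status, transfer counters and the configured peer
wg show wg0

# Connectivity check across the tunnel
ping -c 4 10.100.0.1

# Start the tunnel automatically on boot
systemctl enable wg-quick@wg0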
Why Infrastructure Matters
You can have the most optimized code in the world, but if your underlying host is oversubscribed, your latency variance (jitter) will ruin the architecture. Many "budget" VPS providers pack hundreds of containers onto a single host. When one neighbor compiles a kernel, your database writes stall.
This is why we architect CoolVDS differently. We utilize KVM virtualization, which provides a hardware-enforced isolation layer. Our storage backend is pure NVMe, providing the IOPS necessary to handle thousands of concurrent MQTT writes without the "iowait" spikes seen on SATA SSD or HDD setups. Furthermore, being located physically in Oslo and connected directly to NIX (the Norwegian Internet Exchange) ensures your latency to Norwegian end-users remains in the single digits.
Compliance and GDPR
Data residency is becoming a massive legal minefield. By processing and storing the initial data layer on a VPS in Norway, you simplify compliance. You aren't shipping raw PII (Personally Identifiable Information) across borders unnecessarily. You can sanitize the data locally on the CoolVDS instance before transmitting only anonymized aggregates to international clouds.
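With the stack above, one way to enforce this is to let Telegraf do the anonymization and aggregation before anything leaves the node. The snippet below is a sketch, not a drop-in: it assumes a hypothetical device_id tag carries the identifier you want to keep local, that the HQ side of the WireGuard tunnel runs InfluxDB at 10.100.0.1, and that your Telegraf version supports the standard plugin modifiers (name_suffix, namepass, tagexclude) on these plugins. Adapt it to your actual schema:

# Append a local aggregation and sanitization layer (adjust to your schema)
cat >> ./telegraf/telegraf.conf <<'EOF'

# Roll raw readings up into 5-minute statistics, emitted alongside the raw data
[[aggregators.basicstats]]
  period = "5m"
  drop_original = false
  stats = ["mean", "min", "max", "count"]
  name_suffix = "_agg"

# Second output: only anonymized aggregates cross the border, over the WireGuard tunnel.
# 10.100.0.1 is the assumed HQ-side address; device_id is a hypothetical PII tag.
[[outputs.influxdb]]
  urls = ["http://10.100.0.1:8086"]
  database = "iot_aggregates"
  namepass = ["*_agg"]
  tagexclude = ["device_id"]
EOF

docker-compose restart telegraf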
Next Steps
The edge is not coming; it is here. Whether you are managing fleet logistics or simply trying to serve a React app faster to customers in Tromsø, moving your compute closer to the user is the only way to beat the speed of light.
Don't let slow I/O or network hops kill your performance. Spin up a KVM-based, NVMe-powered instance on CoolVDS today and test the latency yourself.