Edge Computing & IoT in 2018: Overcoming Latency with Norwegian VPS Infrastructure

Edge Computing in 2018: Moving Beyond the Centralized Cloud Hype

Physics is stubborn. It does not care about your Service Level Agreement (SLA), and it certainly doesn't care about your cloud provider's marketing claims. If you are running mission-critical IoT infrastructure in Trondheim but processing your logic in a data center in Frankfurt or Dublin, you are fighting a losing battle against the speed of light.

As we approach mid-2018, the industry obsession with "Cloud First" is hitting a wall. We are seeing it in industrial automation, in high-frequency trading, and specifically in the Nordic market where connectivity varies wildly between Oslo and the fjords. The solution isn't a bigger cloud; it's moving the compute closer to the data source. This is Edge Computing.

I recently audited a setup for a maritime logistics firm monitoring hull sensors. They were piping raw MQTT data directly to AWS us-east-1. The latency jitter (200ms+) was causing timeouts in their control loops, and the bandwidth bill was horrific. The fix wasn't AI—it was deploying a localized aggregation layer on a sturdy KVM VPS in Oslo.

The Latency Mathematics: Why "The Edge" Matters

Let's look at the numbers. A round-trip ping from Northern Norway to Central Europe averages 35-50ms on a good day. Over 4G/LTE, add another 40-100ms. If your application relies on a request-response cycle to make a decision (e.g., "Stop the turbine"), you are easily looking at 150-200ms of total delay once server processing is included. In industrial contexts, that is an eternity.
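The budget above is simple addition, but it is worth making explicit. A minimal sketch of the arithmetic, using the ranges quoted above (the function name and processing figure are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope latency budget for one cloud round trip.
# Figures are the rough ranges quoted above, not measurements.

def control_loop_delay_ms(wan_rtt_ms, lte_rtt_ms, processing_ms=20, retries=0):
    """Total delay for one request-response decision cycle."""
    one_trip = wan_rtt_ms + lte_rtt_ms + processing_ms
    # Each retry repeats the whole cycle.
    return one_trip * (1 + retries)

# Worst case from the ranges above: 50ms WAN RTT + 100ms LTE + ~50ms processing
print(control_loop_delay_ms(wan_rtt_ms=50, lte_rtt_ms=100, processing_ms=50))
# prints 200
```

Swap in a sub-3ms RTT to a local edge node and the same budget collapses to whatever your processing time is.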

By placing an "Edge Node"—essentially a high-performance VPS—in a local data center connected to the NIX (Norwegian Internet Exchange), you cut that WAN latency down to single digits.

Pro Tip: Do not use OpenVZ or LXC for edge nodes that require heavy kernel tuning for networking. You need KVM virtualization (standard on CoolVDS) to modify kernel parameters like tcp_tw_reuse or load custom modules for specific VPN protocols.

Architecture: The "Fog" Layer

In 2018, the most robust stack for an edge aggregator involves three components:

  1. Ingest: Mosquitto (MQTT Broker) or RabbitMQ.
  2. Storage: InfluxDB (Time-series) for temporary buffering.
  3. Processing: Python/Go workers or Node.js.

The goal is to ingest high-frequency noise, aggregate it locally, and only send valuable averages to the central cloud.
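The aggregation step is the heart of the fog layer. A minimal sketch of what the processing worker does, stripped of the broker plumbing (in production the readings arrive via Mosquitto; the sensor IDs and tuple format here are placeholders for illustration):

```python
import statistics
from collections import defaultdict

# Sketch of the edge aggregation step: collapse high-frequency raw readings
# into one average per sensor per time window before forwarding to the cloud.

def aggregate_window(readings):
    """readings: iterable of (sensor_id, value) collected over one window."""
    buckets = defaultdict(list)
    for sensor_id, value in readings:
        buckets[sensor_id].append(value)
    # Only these small per-sensor summaries leave the edge node.
    return {sid: round(statistics.mean(vals), 2) for sid, vals in buckets.items()}

raw = [("hull-1", 20.1), ("hull-1", 20.5), ("hull-2", 18.0), ("hull-2", 18.4)]
print(aggregate_window(raw))  # prints {'hull-1': 20.3, 'hull-2': 18.2}
```

Thousands of raw points per window reduce to one value per sensor: that ratio is where the bandwidth savings come from.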

1. Tuning the Kernel for High Concurrency

IoT devices maintain open TCP connections. A standard Ubuntu 16.04 server is configured for general web serving, not thousands of persistent sensor connections. You must modify the sysctl config.

Edit /etc/sysctl.conf:

# Increase system-wide file descriptors
fs.file-max = 2097152

# Allow more connections to be handled
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535

# Increase ephemeral ports range
net.ipv4.ip_local_port_range = 1024 65535

# Reuse Time-Wait sockets (Critical for rapid reconnects)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

Apply these changes immediately:

sysctl -p
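Note that fs.file-max is the system-wide ceiling; each process is still capped by its own ulimit. If the broker hits "too many open files" despite the sysctl changes, raise the per-process limit as well. On Ubuntu 16.04 this typically means editing /etc/security/limits.conf (the values below are a reasonable starting point, not a universal prescription):

```
# /etc/security/limits.conf -- raise the per-process descriptor cap
# (fs.file-max above is system-wide; processes are still bound by ulimit -n)
*    soft    nofile    1048576
*    hard    nofile    1048576
```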

2. The Storage Bottleneck: Why NVMe is Non-Negotiable

This is where hardware choice becomes critical. When 5,000 sensors send a heartbeat simultaneously, your Disk I/O Queue spikes. Traditional SSDs (SATA) often choke under high IOPS (Input/Output Operations Per Second) of small random writes, which is exactly what database logging looks like.

At CoolVDS, we enforce NVMe storage for this reason. NVMe talks directly to the CPU via PCIe, bypassing the SATA controller bottleneck. If you are writing to InfluxDB or MongoDB on a standard VPS, you will see iowait shoot up in top, and your CPU will sit idle waiting for the disk.
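You can see this workload pattern for yourself without installing a benchmark suite. A crude sketch that mimics what a database write-ahead log does (many tiny writes, each forced to disk with fsync); absolute numbers depend entirely on the underlying storage, and this is an illustration of the pattern, not a rigorous benchmark:

```python
import os
import tempfile
import time

# Mimic the workload that hurts SATA SSDs: small writes, each fsync'd,
# like a database WAL. Per-write latency, not throughput, is what matters.

def timed_fsync_writes(n=100, size=256):
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    try:
        for _ in range(n):
            os.write(fd, b"x" * size)   # small payload, like a WAL entry
            os.fsync(fd)                # force it to stable storage
    finally:
        os.close(fd)
        os.unlink(path)
    elapsed = time.perf_counter() - start
    return (elapsed / n) * 1000  # average ms per synced write

print(f"{timed_fsync_writes():.3f} ms per synced 256-byte write")
```

Run this on SATA and on NVMe and the gap in per-write latency makes the iowait argument concrete.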

3. Configuring InfluxDB for Edge Retention

You don't want to store data forever on the edge. You want a rolling buffer. In InfluxDB (v1.4 is the current stable choice), set a strict Retention Policy (RP).

CREATE RETENTION POLICY "one_week" ON "sensor_data" DURATION 7d REPLICATION 1 DEFAULT

This ensures your edge node never runs out of disk space, automatically dropping data older than 7 days. This "self-cleaning" mechanism is essential for remote servers you don't want to babysit.
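You can pair the retention policy with a continuous query so InfluxDB downsamples automatically, leaving only the averages to ship upstream. A sketch, assuming a raw measurement and field name that you would replace with your own schema:

```
-- Hypothetical measurement/field names; adapt to your schema.
CREATE CONTINUOUS QUERY "cq_5m_avg" ON "sensor_data"
BEGIN
  SELECT mean("value") INTO "sensor_data"."one_week"."avg_readings"
  FROM "raw_readings" GROUP BY time(5m), *
END
```

The forwarder process then reads only avg_readings, never the raw firehose.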

Data Sovereignty and GDPR (The May 2018 Deadline)

We are months away from the General Data Protection Regulation (GDPR) enforcement date (May 25, 2018). This is causing massive headaches for CTOs relying on US-based giants.

If your edge devices collect Personally Identifiable Information (PII)—and IP addresses or MAC addresses can count as PII under specific contexts—storing that data on a US-controlled server legally exposes you. By utilizing a Norwegian VPS provider like CoolVDS, data stays within the jurisdiction of the EEA/Norway, simplifying compliance with Datatilsynet regulations.

Comparison: Cloud vs. Edge VPS

Feature              | Centralized Cloud (Frankfurt/London) | Local Edge VPS (CoolVDS Oslo)
Latency (from Oslo)  | 25ms - 40ms                          | < 3ms
Data Cost            | High (Ingress/Egress fees)           | Predictable / Unmetered
Hardware Control     | Shared, opaque                       | KVM / Dedicated Resources
Storage              | Standard SSD / EBS                   | Local NVMe

Implementation Strategy

For a robust deployment in 2018, I recommend avoiding the overhead of heavy orchestration tools if you only manage a handful of nodes. While Kubernetes is gaining traction, it is overkill for a single edge concentrator.

Stick to Docker CE (17.12) and Docker Compose. It creates a reproducible environment that is easy to update.

Here is a sample docker-compose.yml for an edge node:

version: '3'
services:
  mosquitto:
    image: eclipse-mosquitto:1.4.12
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: always

  influxdb:
    image: influxdb:1.4
    volumes:
      - ./influxdb:/var/lib/influxdb
    environment:
      - INFLUXDB_DB=sensors
    restart: always

  nodered:
    image: nodered/node-red-docker:v8
    ports:
      - "1880:1880"
    depends_on:
      - mosquitto
      - influxdb
    restart: always
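One detail that trips people up: the ./mosquitto/config volume mounted above must contain a mosquitto.conf, or the broker falls back to defaults. A minimal starting point (authentication settings are deliberately left as comments; do not run an open broker in production):

```
# ./mosquitto/config/mosquitto.conf -- minimal config for the mount above
persistence true
persistence_location /mosquitto/data/
listener 1883

# Production hardening -- uncomment and create the password file:
# allow_anonymous false
# password_file /mosquitto/config/passwd
```

With that in place, `docker-compose up -d` brings up the whole edge stack.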

Final Thoughts

The edge is not about replacing the cloud; it is about protecting it from garbage data and ensuring your local operations survive a network partition. Whether you are running heat pumps in Tromsø or servers in a downtown Oslo office, the latency penalty of leaving the country is real.

Don't let IO wait or network lag dictate your system's performance. Test your edge architecture on hardware that keeps up.

Ready to lower your latency? Deploy a high-performance NVMe KVM instance on CoolVDS today and keep your data in Norway.