The Speed of Light is Your Biggest Competitor
If you are serving content to a user in Tromsø from a data center in Frankfurt, you are fighting physics, and physics always wins. In high-frequency trading or real-time bidding, milliseconds are money. Even for a standard Magento e-commerce stack, a 100ms delay in Time to First Byte (TTFB) correlates directly with cart abandonment.
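You can see exactly what physics is costing you with curl's timing variables; the URL below is a placeholder, substitute your own endpoint:

```shell
# Break a request into phases; example.com is a placeholder URL.
curl -o /dev/null -s -w \
  'DNS: %{time_namelookup}s  TCP connect: %{time_connect}s  TTFB: %{time_starttransfer}s\n' \
  http://example.com/
```

Run it from the market you serve, not from your own rack: a TTFB measured next to the server tells you nothing about the user in Tromsø.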
We talk a lot about "The Cloud," but the reality is that centralized clouds are slow for the edges of the network. Norway is an edge case—literally. The geography is rugged, and the distance to continental Europe is significant. Relying on a centralized US-East or even a generic EU-West availability zone is a strategic error.
The solution isn't just "buy a CDN." Public CDNs share resources. Noisy neighbors on a shared edge node can wreck your consistency. The solution is building your own Edge Point of Presence (POP) using lightweight, high-performance VPS nodes. Here is how we engineer it.
The Architecture: Distributed Varnish Nodes
In 2014, the most robust way to handle this is not a bigger monolith, but a distributed caching layer. We place a lightweight Varnish 4.0 instance in Oslo (connected via NIX, the Norwegian Internet Exchange) to terminate the connection as close to the user as possible.
This "Edge VPS" does three things:
- Terminates SSL: Offloads the TLS handshake from the backend (Varnish itself does not speak SSL, so you put Nginx or stud in front of it).
- Caches Static Assets: Serves images/CSS from RAM.
- Keep-Alive: Maintains a persistent connection to the heavy backend (database/app server).
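To make the first bullet concrete, here is a minimal Nginx server block for TLS termination in front of Varnish. The domain, certificate paths, and the Varnish listen port (6081 here) are assumptions; adapt them to your layout:

```nginx
server {
    listen 443 ssl;
    server_name example.no;                       # placeholder domain

    ssl_certificate     /etc/nginx/ssl/edge.crt;  # assumed cert paths
    ssl_certificate_key /etc/nginx/ssl/edge.key;

    location / {
        # Hand the decrypted request to Varnish on the same box
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The X-Forwarded-Proto header matters: it lets the backend (and your VCL) distinguish HTTPS traffic even though everything behind Nginx travels as plain HTTP.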
1. The Kernel Tuning (sysctl.conf)
Default Linux distributions (CentOS 6.5 or Ubuntu 14.04 LTS) are tuned for compatibility, not high-throughput edge serving. You need to modify the TCP stack. The following settings are mandatory for any node facing the public internet with high concurrency.
Edit /etc/sysctl.conf:
# Increase system file descriptor limit
fs.file-max = 2097152
# TCP Hardening and Performance
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_syn_backlog = 3240000
net.netfilter.nf_conntrack_max = 3240000
net.ipv4.tcp_fin_timeout = 15
# Reuse connections in TIME_WAIT state (Careful with NAT)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
# Increase the read/write buffer sizes for high latency links
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

Apply this with sysctl -p. Note the tcp_tw_recycle flag: it is controversial because it silently drops SYNs from clients behind NAT (their TCP timestamps disagree), but in a pure edge load-balancer scenario in 2014 it is often necessary to prevent port exhaustion during DDoS attacks or massive traffic spikes. Test it against real mobile and corporate-NAT traffic before you commit.
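One gotcha: fs.file-max raises the system-wide ceiling, but every process is still capped by its own ulimit. Pair the sysctl change with /etc/security/limits.conf (the values here are illustrative):

```
# /etc/security/limits.conf — per-process descriptor limits
*    soft    nofile    1048576
*    hard    nofile    1048576
```

Confirm with `ulimit -n` in a fresh login session. Daemons launched from init scripts bypass PAM limits entirely, so Varnish and Nginx may need the ulimit raised in their own init scripts instead.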
2. Varnish 4.0 Configuration
Varnish 4.0 was released this April (2014), and the VCL syntax has changed significantly from 3.0. If you are still running 3.0, upgrade. The threading model is better. Here is a VCL snippet to handle the "Edge" logic—gracefully handling backend failures.
If your main backend goes down, the Edge node in Oslo should serve stale content rather than an error. This is crucial for reliability.
vcl 4.0;
backend default {
.host = "10.0.0.5"; # Your heavy backend IP
.port = "8080";
.probe = {
.url = "/health.check";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
sub vcl_backend_response {
# Cache content for 1 hour by default
set beresp.ttl = 1h;
# Allow serving stale content for 6 hours if backend is sick
set beresp.grace = 6h;
}

Storage: Why Spindles Are Dead
If you are building an edge node that caches to disk (for large video files or extensive catalogs), I/O wait is the metric that will get you fired. Traditional SAS 15k RPM drives cannot handle the random read patterns of a busy cache node.
We ran a benchmark comparing standard SATA SSDs against the emerging PCIe Flash (NVMe) technology available on select enterprise clusters.
| Storage Type | Random Read (IOPS) | Latency |
|---|---|---|
| 7.2k RPM HDD | ~80-100 | ~12ms |
| Enterprise SATA SSD | ~50,000 | ~0.5ms |
| CoolVDS NVMe/PCIe | ~400,000+ | <0.1ms |
At CoolVDS, we have started rolling out NVMe-based instances because the bottleneck has shifted from the CPU to the storage controller. When you are pushing 1Gbps of traffic, you cannot wait for a disk head to spin.
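The HDD row in that table is not marketing; it falls straight out of the mechanics. A back-of-envelope check, assuming a typical ~8 ms average seek on a 7,200 RPM spindle:

```shell
# Average rotational latency is half a revolution; add average seek time.
awk 'BEGIN {
  rpm     = 7200
  rot_ms  = (60000 / rpm) / 2   # ~4.2 ms waiting for the platter
  seek_ms = 8                   # typical average seek (assumption)
  total   = rot_ms + seek_ms
  printf "per-op latency: %.1f ms, ceiling: %d random IOPS\n", total, 1000 / total
}'
# → per-op latency: 12.2 ms, ceiling: 82 random IOPS
```

An SSD has no moving parts, so this mechanical ceiling simply does not exist; its tens of thousands of IOPS are bounded by the controller and interface instead.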
The "Datatilsynet" Factor
Operating in Norway involves strict adherence to the Personal Data Act (Personopplysningsloven). Unlike US-based hosting where data sovereignty is a grey area, hosting on a Norwegian VPS gives you legal clarity. Your data stays within the jurisdiction.
Pro Tip: If you are logging IP addresses in Nginx for analytics, you are processing personal data. Configure Nginx to mask the last octet if you do not strictly need it for security.
map $remote_addr $ip_anonymized {
default 0.0.0.0;
"~(?P<ip>\d+\.\d+\.\d+)\.\d+" $ip.0;
}

Why KVM Over OpenVZ?
Many "budget" VPS providers in the Nordics use OpenVZ. This creates a container on a shared kernel. It is efficient for the host, but terrible for the "Performance Obsessive." If your neighbor gets DDoS'd, the shared kernel's connection-tracking tables fill up for everyone, and you slam into the beancounter limits (watch failcnt climb in /proc/user_beancounters) through no fault of your own.
We use KVM (Kernel-based Virtual Machine). You get your own kernel. You get dedicated memory segments. You can load your own custom kernel modules if you need specific TCP congestion control algorithms like TCP Hybla (useful for high-latency satellite links in Northern Norway).
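As a sketch of what that freedom looks like in practice (switching requires root on your own KVM guest, and module availability depends on how the kernel was built):

```shell
# Inspect what the running kernel offers — no root needed.
cat /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# Switching needs root and, for Hybla, the module (KVM only):
#   modprobe tcp_hybla
#   sysctl -w net.ipv4.tcp_congestion_control=hybla
```

On an OpenVZ container the modprobe simply fails: you share the host's kernel and take whatever congestion control it shipped with.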
Deploying the Edge
Stop settling for 200ms latency from Amsterdam. If your market is Norway, your servers should be in Norway. The combination of Nginx for termination, Varnish 4.0 for caching, and KVM-backed NVMe storage provides a foundation that can handle thousands of concurrent connections on a single node.
Don't let slow I/O kill your SEO. Deploy a test instance on CoolVDS in 55 seconds and see the difference a local ping makes.