Scaling API Gateways in 2016: Kernel Tuning & Nginx Optimization for Nordic Latency

Stop Letting Default Configs Throttle Your API

If you run a service targeting the Nordic market, you know the geography is unforgiving. Packets traveling from a user in Alta to a data center in Frankfurt face unavoidable physical latency. If you host your API gateway outside of Norway, you are already starting with a handicap. But even if you have wisely moved your infrastructure to Oslo, there is a silent killer in your stack: default configurations.

Most Linux distributions and reverse proxies ship with settings designed for compatibility, not high-performance throughput. I recently audited a payment gateway for a local fintech startup experiencing timeouts during peak loads. Their code was fine. Their database was decent. But their API gateway (Nginx) was choking on ephemeral port exhaustion and unnecessary SSL handshakes.

In this post, we are going to tear down the default stack and rebuild it for the specific constraints of 2016's hardware and network realities.

1. The OS Layer: Tuning the Kernel for Concurrency

Before touching Nginx or HAProxy, look at sysctl. By default, a Linux server is tuned for a desktop or a low-traffic file server. When you hit 5,000 concurrent connections, those conservative defaults bite: listen backlogs overflow, sockets pile up in TIME_WAIT, and the kernel starts dropping packets.

In a high-load API scenario, you run out of file descriptors and TCP sockets fast. Here is the production-hardened /etc/sysctl.conf configuration we use at the base of our CoolVDS templates for high-performance nodes:

# Increase system file descriptor limit
fs.file-max = 2097152

# Increase the read/write buffers for TCP
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Increase the max number of backlog connections
net.core.somaxconn = 65535

# Reuse sockets in TIME_WAIT state for new connections
# Critical for API gateways making frequent upstream calls
net.ipv4.tcp_tw_reuse = 1

# Port range expansion
net.ipv4.ip_local_port_range = 1024 65535

Apply this with sysctl -p. The tcp_tw_reuse flag is controversial in some circles, but it only affects outgoing connections, and for an API gateway making frequent upstream calls behind a load balancer it is absolutely necessary to prevent ephemeral port exhaustion.
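
One caveat: fs.file-max is only the system-wide ceiling. Each process is still capped by its own nofile limit, so raise that as well and tell Nginx about it. A minimal sketch, assuming Nginx runs as the nginx user and your system reads /etc/security/limits.conf (the 262144 figure is illustrative; size it to your expected concurrency):

# /etc/security/limits.conf -- per-process descriptor limits
nginx soft nofile 262144
nginx hard nofile 262144

# nginx.conf (main context) -- let worker processes use the raised limit
worker_rlimit_nofile 262144;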

2. Nginx: The "Keep-Alive" Trap

Most DevOps engineers configure Nginx as a reverse proxy and forget one crucial detail: Upstream Keep-Alive. By default, Nginx opens a new connection to your backend application (Node.js, Go, PHP-FPM) for every single request. This creates massive overhead.

You need to keep that pipe open. Here is how you configure your upstream block correctly:

upstream backend_api {
    server 10.0.0.5:8080;
    # Maintain 64 idle connections to the upstream
    keepalive 64;
}

server {
    location /api/ {
        proxy_pass http://backend_api;
        
        # REQUIRED for keepalive to work
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
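
To verify the idle pool is actually being reused, watch established connections from the gateway to the upstream (10.0.0.5:8080 from the example above); under steady load the count should hover near your keepalive value instead of churning:

ss -tn state established '( dport = :8080 )'
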
Pro Tip: If you are using SSL termination at the gateway (which you should be), ensure you are using session tickets. The handshake is the most expensive part of the request. With the recent rise of Let's Encrypt in 2016, encryption is free, but CPU cycles are not.
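
A sketch of the relevant directives for the server block (cache size and timeout are illustrative; tune them to your traffic):

# Resume TLS sessions instead of running a full handshake every time
ssl_session_cache shared:SSL:10m;   # roughly 40,000 sessions per 10 MB
ssl_session_timeout 10m;
ssl_session_tickets on;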

3. The Hardware Bottleneck: Why IOPS Matter

You can tune your software until you are blue in the face, but you cannot code your way out of slow hardware. API gateways log heavily. Access logs, error logs, audit trails. If your VPS is running on spinning rust (HDD) or even cheap SATA SSDs shared with 50 other tenants, your disk I/O wait times will skyrocket.
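
One cheap mitigation at the Nginx layer is buffered logging, so the gateway issues one large write instead of a syscall per request. A minimal sketch (buffer and flush values are illustrative):

# Buffer access log writes; flush at 64 KB or every 5 seconds
access_log /var/log/nginx/access.log combined buffer=64k flush=5s;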

This is where virtualization architecture becomes critical. Many providers use OpenVZ, where you share the kernel with noisy neighbors. If another tenant decides to compile a kernel or run a backup, your API latency spikes.

Comparison: Storage Tech in 2016

Storage Type              | Avg IOPS       | Latency Impact
Standard HDD (7.2k RPM)   | 80-120         | High (blocking)
SATA SSD (standard)       | 5,000-10,000   | Medium
NVMe (CoolVDS standard)   | 20,000+        | Near zero

At CoolVDS, we enforce KVM virtualization. This means you get a dedicated kernel and reserved resources. We also rolled out NVMe storage across our Oslo nodes earlier this year. For an API gateway pushing logging to disk, NVMe is not a luxury; it is a requirement to maintain sub-100ms response times.
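
Don't take IOPS claims on faith, including ours. A quick fio random-write test (all parameters illustrative) tells you what your current disk actually delivers:

# 4k random writes, direct I/O to bypass the page cache
fio --name=iops-test --filename=/tmp/fio-test --size=1G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=32 --runtime=30 --time_based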

4. Local Nuances: NIX and Privacy

Hosting in Norway isn't just about speed; it's about sovereignty. With the invalidation of the Safe Harbor agreement last year and the new Privacy Shield framework just getting started, data residency is a hot topic. The Datatilsynet (Norwegian Data Protection Authority) is becoming increasingly strict.

By keeping your API gateway and database in Oslo, you reduce legal friction. Furthermore, CoolVDS peers directly at NIX (Norwegian Internet Exchange). If your users are on Telenor or Telia networks, their requests reach your server in single-digit milliseconds, bypassing international routes entirely.

5. HTTP/2 is Here. Use It.

Nginx added support for HTTP/2 in version 1.9.5. If your API clients (mobile apps, browsers) support it, this is an instant performance win due to header compression and multiplexing. You don't need domain sharding anymore.

server {
    listen 443 ssl http2;
    server_name api.yourdomain.no;

    # Certificate paths are illustrative -- point them at your own chain
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
}

Note the inclusion of CHACHA20. Provided your OpenSSL build supports it, this cipher is much faster on mobile devices that lack hardware AES acceleration (common in older Android phones still prevalent in the market).
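
To confirm clients are actually negotiating HTTP/2, test from the command line (assuming your curl is built against nghttp2; otherwise the nghttp client from the nghttp2 package works too):

curl -I --http2 https://api.yourdomain.no/
# The status line should read "HTTP/2 200"; if you still see
# HTTP/1.1, the client fell back to the older protocol.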

Summary

Performance is a stack. It starts with the hardware (NVMe), moves to the kernel (sysctl), and finishes with the application config (Nginx). Neglect one, and the others suffer.

Don't let legacy infrastructure throttle your growth. If you need a KVM instance that can actually handle the sysctl tuning discussed above without hitting "resource limit" errors, you know where to look.

Ready to test your latency? Deploy a CoolVDS instance in Oslo today and run wrk against your current provider. The numbers won't lie.
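
A baseline run might look like this (threads, connections, and endpoint are illustrative):

# 4 threads, 128 open connections, 30 seconds, with latency distribution
wrk -t4 -c128 -d30s --latency https://api.yourdomain.no/api/health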