API Gateway Performance: Tuning Nginx for Millisecond Latency in Norway

Let’s be honest: your default nginx.conf is garbage. If you are serving APIs to mobile clients across Scandinavia using a stock configuration on a budget VPS, you are hemorrhaging users. In 2014, users don't wait. If your JSON payload takes 400ms to arrive because of a sloppy TCP handshake or disk I/O wait, your app is effectively broken.

I’ve spent the last month debugging a high-traffic API for a client based in Oslo. They were seeing random latency spikes despite low CPU usage. The culprit? A combination of poor kernel defaults and the "noisy neighbor" effect common in cheap container-based hosting. Here is how we fixed it, and how you can tune your stack to handle the Nordic traffic surges.

1. The Foundation: Kernel Tuning

Before touching the web server, we need to look at the Linux kernel. Most distributions, including the recently released CentOS 7, ship with conservative networking defaults designed for general-purpose computing, not high-throughput API gateways. When you have thousands of ephemeral connections hitting your API, you will run out of file descriptors or hit TCP timewait limits fast.

Open your /etc/sysctl.conf. We need to widen the TCP port range and allow the reuse of sockets in the TIME_WAIT state. This is critical for REST APIs where clients (and your upstream backend connections) open and close sockets rapidly.

# /etc/sysctl.conf

# Maximize the number of open file descriptors
fs.file-max = 2097152

# Widen the port range
net.ipv4.ip_local_port_range = 1024 65535

# Reuse sockets in TIME_WAIT state for new connections
# Critical for high-request API backends
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

# Increase the backlog for incoming connections
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535

Apply these with sysctl -p. If you are on a shared container host (like OpenVZ), you might find these are locked. This is why we exclusively use KVM virtualization at CoolVDS. You need your own kernel to do serious tuning.
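
To confirm the host actually honours the new values, read them back after reloading. A quick sanity check, roughly:

# Reload kernel parameters from /etc/sysctl.conf
sysctl -p

# Read the values back; a locked container host will still show the old defaults
sysctl net.core.somaxconn net.ipv4.tcp_tw_reuse fs.file-max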

2. Nginx: The Gateway Config

Nginx 1.6 is currently the industry standard for this role. It is event-driven and eats Apache for breakfast when it comes to concurrency. However, for an API Gateway, we aren't just serving static assets; we are proxying requests. The keepalive directive is often misunderstood here.

You need keepalives on both sides: the client facing side (to reduce SSL handshake overhead) and the upstream side (to save the cost of opening connections to your backend application servers).

# /etc/nginx/nginx.conf

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # ... standard logs ...

    # API Optimization
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    
    # Keepalive is cheaper than a new handshake
    keepalive_timeout 30;
    keepalive_requests 100000;

    upstream backend_api {
        server 127.0.0.1:8080;
        # Maintain a pool of open connections to the app server
        keepalive 64;
    }

    server {
        listen 80;
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key omitted here for brevity
        
        location /api/ {
            proxy_pass http://backend_api;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            
            # Buffer tuning for JSON payloads
            proxy_buffers 16 16k;
            proxy_buffer_size 32k;
        }
    }
}

Pro Tip: Setting proxy_http_version 1.1 and clearing the Connection header is mandatory for upstream keepalive. Miss either one and Nginx closes the backend connection after every request, paying a fresh TCP handshake each time.
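
To sanity-check that upstream keepalive is actually working, watch the sockets to the backend from the config above (127.0.0.1:8080) while under load. With a healthy pool, the established count hovers around the keepalive value instead of thousands of connections churning open and closed:

# Established connections from Nginx workers to the backend
# Should sit near the pool size (64), not spike with every burst of traffic
ss -tn state established '( dport = :8080 )'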

3. The SSL/TLS Tax

With the recent POODLE and Heartbleed vulnerabilities, security is top of mind. However, SSL handshakes are CPU-intensive. In Norway, where privacy rules are enforced by Datatilsynet, you cannot skip encryption. But you can optimize it.

Ensure you are using Session Resumption. This allows clients to reuse previously negotiated SSL parameters, skipping the heavy cryptographic lifting on subsequent connections.

ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Drop SSLv3 immediately
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:kEDH+AESGCM';
ssl_prefer_server_ciphers on;
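
To verify resumption is working, openssl s_client can reconnect a few times and report whether the session was reused (replace the hostname with your own endpoint):

# Reconnects 5 times with the same session; look for "Reused" rather than "New" in the output
openssl s_client -connect api.example.no:443 -reconnect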

4. The Hardware Reality: Why I/O Kills APIs

You might think APIs are just CPU and RAM. You are wrong. Logging access, writing to databases, and even temporary file buffering all hit the disk. On standard spinning rust (HDD), a sudden spike in traffic causes I/O wait (iowait) to skyrocket. Your CPU sits idle waiting for the disk, and your API latency jumps from 50ms to 500ms.
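
One cheap mitigation on the Nginx side is buffered access logging, so a burst of requests doesn't turn into a synchronous disk write per request. A sketch, using whatever log path and format you already have:

# Buffer log writes in memory and flush at most every 5 seconds
# (flush= requires nginx 1.3.10+, which is fine on 1.6)
access_log /var/log/nginx/api_access.log combined buffer=32k flush=5s;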

This is where the infrastructure choice becomes political. Your CFO wants cheap hosting. You want performance.

Feature              Standard VPS (HDD)            CoolVDS (Pure SSD/NVMe)
Random Read IOPS     ~100-200                      ~10,000+
Disk Latency         5-10ms                        < 0.1ms
Impact on API        Micro-stalls during logging   Zero blocking

We are starting to see NVMe drives enter the enterprise space this year. While still expensive, they bypass the legacy SATA controller bottlenecks. At CoolVDS, we are aggressive about adopting this. For a database-heavy API, the difference isn't just speed; it's consistency.

5. Local Connectivity: The NIX Factor

If your users are in Oslo, Bergen, or Trondheim, hosting in Frankfurt adds 20-30ms of latency simply due to the speed of light and routing hops. Physics always wins.

For Norwegian workloads, you need a provider peered at NIX (Norwegian Internet Exchange). Routing local traffic through Sweden or Germany is unnecessary overhead. We ensure our routes stay local where possible, keeping that ping time single-digit for your Nordic user base.
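
To see where your packets actually go, trace the route from a client in Norway (the hostname below is a placeholder for your own endpoint):

# Per-hop latency report; watch for detours through Swedish or German transit
mtr --report --report-cycles 10 api.example.no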

Final Thoughts

Tuning an API gateway is an exercise in removing bottlenecks. You start with the kernel, move to the Nginx configuration, and finally, ensure the underlying metal isn't sabotaging you.

Don't let legacy infrastructure dictate your application's performance. If you need a sandbox to test these configurations, spin up a high-performance instance. Slow I/O drags down your response times, and slow response times hurt everything from user retention to SEO.

Deploy a test instance on CoolVDS in 55 seconds and see the difference raw KVM power makes.