Squeezing Microseconds: High-Performance API Gateway Tuning for Nordic Traffic

Latency is the New Downtime: A Systems Architect's Guide to API Gateways

If you are running your API gateway on default settings, you are effectively setting money on fire. I recently audited a payment processing cluster in Oslo where the development team was baffled by 500ms latency spikes. The application code was well-optimized Rust. The database was a tuned PostgreSQL instance. But the gateway? It was a stock Nginx container choking on file descriptor limits and SSL handshakes.

In the Norwegian market, where high-speed fiber is the norm and users expect instantaneous interactions, an unoptimized gateway is a bottleneck you cannot afford. This isn't just about speed; it's about stability under load and compliance with data sovereignty laws like GDPR. Let's fix your configuration.

1. The Foundation: Linux Kernel Tuning

Before touching the application layer, we must address the OS. Most Linux distributions, including the Ubuntu 22.04 LTS images we often deploy, ship with conservative defaults designed for desktop compatibility, not high-throughput packet switching.

The first bottleneck you will hit is the limit on open files. In Linux, everything is a file, including a TCP connection. Default limits are often set to 1024. For an API gateway handling thousands of concurrent connections, this is laughable.
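
Note that fs.file-max (set below) is only the system-wide ceiling; each process also carries its own limit. The Nginx config in section 2 raises it for the workers via worker_rlimit_nofile, but if Nginx runs under systemd it is worth lifting the service-wide limit too with a drop-in override (the path assumes the stock nginx.service unit; adjust to taste):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535

Reload with systemctl daemon-reload, restart Nginx, then verify with cat /proc/$(pgrep -o nginx)/limits | grep "open files".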

Essential sysctl.conf Modifications

Edit /etc/sysctl.conf to loosen the TCP stack's conservative defaults. We need to enable TCP Fast Open, increase the backlog queues, and switch to the BBR congestion control algorithm, which handles packet loss significantly better than CUBIC.

# /etc/sysctl.conf

# Increase system-wide file descriptor limit
fs.file-max = 2097152

# Widen the TCP ephemeral port range to allow more connections
net.ipv4.ip_local_port_range = 10000 65535

# Increase the maximum number of connections in the backlog
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000

# Reuse sockets in TIME_WAIT state for new outbound connections (essential for high-traffic gateways)
net.ipv4.tcp_tw_reuse = 1
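
# Enable TCP Fast Open (3 = client and server), as described above
net.ipv4.tcp_fastopen = 3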

# Enable BBR Congestion Control (Requires Kernel 4.9+)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Apply these changes with sysctl -p. If you are running on a shared VPS where you lack kernel-level control, stop reading. You need a KVM-based solution like CoolVDS where you have full root access to modify kernel parameters. Container-based virtualization (like OpenVZ) often restricts these modifications, leaving you with no way to tune the network stack when load spikes.
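
Before moving on, confirm that the kernel accepted the congestion-control change; if bbr is missing from the available list, the kernel is older than 4.9 or the tcp_bbr module is not loaded:

# bbr should appear in the available list and be the active algorithm
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control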

2. Nginx / OpenResty Configuration

Whether you use Kong, standard Nginx, or OpenResty, the underlying mechanics are identical. The goal is to keep connections alive to the upstream (your microservices) while handling SSL termination efficiently at the edge.

A common mistake is failing to configure upstream keepalives. Without this, Nginx opens a new TCP connection to your backend service for every single request. This adds the overhead of the TCP three-way handshake to every transaction.

Optimized Nginx Snippet

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 16384;
    use epoll;
    multi_accept on;
}

http {
    # ... logs and mime types ...

    # SSL Optimization for lower CPU usage and faster handshakes
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Upstream configuration with Keepalive
    upstream backend_service {
        server 10.0.0.5:8080;
        keepalive 64;
    }

    server {
        listen 443 ssl http2;
        server_name api.example.no;

        # Buffer adjustments for API payloads
        client_body_buffer_size 128k;
        client_max_body_size 10m;

        location / {
            proxy_pass http://backend_service;
            
            # CRITICAL: Use HTTP/1.1 for keepalive support
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            
            # Pass real IP headers (Crucial for logs and rate limiting)
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

Pro Tip: Monitor SSL Handshakes per second. SSL termination is CPU intensive. If you see high CPU usage but low I/O wait, you likely need a processor with a higher clock speed or AES-NI instruction set support, which is standard on our CoolVDS nodes.
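
Two quick checks on that front, assuming a Linux guest and a reasonably recent OpenSSL build (benchmark the cipher your clients actually negotiate):

# Confirm the CPU exposes the AES-NI instruction set
grep -m1 -o aes /proc/cpuinfo

# Rough AES-GCM throughput using the hardware-accelerated EVP path
openssl speed -evp aes-256-gcm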

3. The Infrastructure Reality: Why "Cloud" Isn't Always the Answer

You can have the most perfectly tuned kernel and Nginx config, but if your underlying storage is slow or your CPU is being stolen by a noisy neighbor, your P99 latency will suffer. This is the "Virtualization Tax."
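
You can spot a noisy neighbor from inside the guest: the st (steal) column of vmstat reports the share of time the hypervisor handed your CPU cycles to someone else.

# Sample CPU statistics once per second, five times; watch the "st" column
vmstat 1 5

Sustained steal above a few percent usually means the host is oversubscribed.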

In 2024, deploying an API gateway on standard HDD or even SATA SSD is negligence. Logging high-throughput traffic requires NVMe storage. When Nginx writes access logs or buffers a large request body to disk, I/O blocking can occur. On a shared spindle drive, a backup running on another customer's VM can spike your latency by 200ms.
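
Fast storage helps, but you can also reduce how often Nginx touches the disk in the first place. Buffered access logging is a one-line change (log path and sizes here are illustrative):

# Batch access-log writes: flush after 64k of buffered entries or every 5 seconds
access_log /var/log/nginx/access.log combined buffer=64k flush=5s;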

We benchmarked a standard CoolVDS NVMe instance against a generic "Big Cloud" general-purpose instance using wrk to simulate load.

Benchmark: Simple JSON Response

Command: wrk -t12 -c400 -d30s https://test-gateway.no/api/v1/ping

Metric            Generic Cloud VPS    CoolVDS (KVM + NVMe)
Requests/sec      14,200               28,500
Avg Latency       28ms                 9ms
Stdev (Jitter)    144ms                12ms

The Stdev (Standard Deviation) is the killer here. Jitter destroys user experience. The stability of KVM isolation ensures that your CPU cycles are yours, and yours alone.

4. The Local Advantage: Norway, NIX, and GDPR

Performance isn't just about raw compute; it's about physics. Light travels at a finite speed. If your users are in Oslo, Bergen, or Trondheim, routing traffic through a data center in Frankfurt adds unavoidable latency (approx. 20-30ms RTT).

By hosting your API Gateway in Norway, you reduce that physical distance. Furthermore, connecting directly to the Norwegian Internet Exchange (NIX) minimizes hops between ISPs.
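
Both effects are easy to measure from a client connection in Norway. mtr shows the round-trip time and every network your packets cross on the way (hostname reused from the illustrative config above):

# 20-cycle report of per-hop round-trip time and packet loss
mtr --report --report-cycles 20 api.example.no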

Beyond physics, there is the legal landscape. Following the Schrems II ruling, transferring personal data outside the EEA has become a compliance minefield. Hosting your primary gateway and database on Norwegian soil simplifies your GDPR compliance posture significantly. It signals to Datatilsynet (and your customers) that you take data sovereignty seriously.

Conclusion

Optimizing an API gateway requires a holistic approach: Kernel parameter tuning, precise application configuration, and the right infrastructure choices. Do not settle for default settings that throttle your throughput. Do not settle for noisy hosting environments that introduce unpredictable jitter.

If you are ready to see what your code can actually do when it isn't fighting the infrastructure, it is time to upgrade.

Action Item: Run wrk against your current setup. If your latency jitter exceeds 50ms, deploy a test instance on CoolVDS today. Experience the difference that local NVMe storage and dedicated KVM resources make.
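
For reference, the benchmark command from section 3 works as a starting point; point it at your own gateway and add --latency to print the percentile breakdown you need to judge jitter:

wrk -t12 -c400 -d30s --latency https://test-gateway.no/api/v1/ping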