Squeezing Milliseconds: Tuning Nginx as an API Gateway for Low-Latency Norwegian Traffic
Let’s be honest: default configurations are for hobbyists. If you are running a high-throughput API in 2016, relying on apt-get install nginx without touching the internals is professional negligence. I recently audited a payment processing stack for a client in Oslo. They were blaming their database for 500ms latency spikes, but the DB was sleeping. The culprit? A default Nginx reverse proxy choking on SSL handshakes and TCP connection overhead.
In the wake of the Safe Harbor ruling invalidating US data transfers last October, more of us are pulling infrastructure back to European soil. But moving to a local datacenter isn't a magic fix if your software stack is unoptimized. Latency isn't just network distance; it's processing time.
This guide assumes you are running a modern Linux distro (like Ubuntu 14.04 LTS or CentOS 7) and are ready to get your hands dirty with kernel flags. We will focus on Nginx 1.9.x, as it introduces the game-changing HTTP/2 support that makes REST APIs significantly snappier.
1. The Kernel: Open the Floodgates
Before requests even hit Nginx, the Linux kernel has to accept the TCP connection. On a standard VPS, the limits are conservative. For an API gateway handling thousands of concurrent connections, these defaults are effectively a DDoS attack against yourself.
Edit your /etc/sysctl.conf. We need to increase the backlog of incoming connections and let the system reuse sockets stuck in TIME_WAIT instead of waiting out the full timeout. This is critical for REST APIs where clients (especially mobile apps) open and close connections rapidly.
# /etc/sysctl.conf
# Increase system-wide file descriptors
fs.file-max = 2097152
# Allow more connections to be queued
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_syn_backlog = 5000
# Reuse TIME_WAIT sockets for new outgoing connections
# (affects Nginx -> upstream traffic, not inbound clients)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
# Increase port range for outgoing connections (upstream)
net.ipv4.ip_local_port_range = 1024 65535
Apply these with sysctl -p. If you are on a restrictive container platform (like older OpenVZ implementations), these settings might fail because you share a kernel. This is why at CoolVDS, we exclusively use KVM. You get your own kernel. You set your own limits. No noisy neighbors.
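A quick sanity check after applying, to confirm the values actually took (on a shared-kernel container some of these will silently stay at their defaults):
sudo sysctl -p                      # load /etc/sysctl.conf
sysctl net.core.somaxconn           # should print 65535
sysctl net.ipv4.tcp_tw_reuse        # should print 1
cat /proc/sys/fs/file-max           # should print 2097152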
2. Nginx Configuration: The Worker Paradigm
Nginx uses an event-driven architecture. The worker_processes directive should generally match your CPU core count. However, the real magic happens in the events block. We need to switch on multi_accept to tell the worker to accept as many connections as possible, rather than just one at a time.
Here is a production-ready snippet for /etc/nginx/nginx.conf tailored for API workloads:
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # ... logs and mime types ...

    # OPTIMIZATION: Buffer Sizes
    # APIs often send small JSON payloads. Keeping requests in memory
    # prevents Nginx from spilling client bodies to temp files on disk.
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    # Raise this if clients send fat cookies or JWT auth headers over 1k
    large_client_header_buffers 2 1k;

    # OPTIMIZATION: Keepalive (client side)
    # These govern browser/app connections to Nginx; upstream keepalive
    # is configured separately in the upstream block (see below).
    keepalive_timeout 30;
    keepalive_requests 100000;

    # Logging is disabled globally here to save I/O; re-enable it
    # per server/location wherever you actually need an audit trail.
    access_log off;

    # ...
}
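If you would rather keep logging on and only silence the noisy endpoints, scope the directive instead. A minimal sketch, assuming a /healthz endpoint (the path is illustrative):
server {
    # Health checks hammer this path every few seconds; don't log them.
    location = /healthz {
        access_log off;
        return 200 "ok\n";
    }
}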
Pro Tip: If you are proxying to a backend like Node.js or Go, use an upstream block with the keepalive directive. Nginx defaults to HTTP/1.0 for upstream connections, which closes the socket after every request. Force HTTP/1.1 to reuse the connection.
upstream api_backend {
    server 127.0.0.1:3000;
    # Pool of idle keepalive connections held open per worker
    keepalive 64;
}

server {
    location /api/ {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        # Clear the Connection header so "close" isn't forwarded upstream
        proxy_set_header Connection "";
    }
}
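To confirm the reuse is actually happening, watch the socket table under load. With working keepalive, the list of established connections to the upstream stays small and stable instead of churning through thousands of short-lived sockets (port 3000 matches the upstream above):
# List established sockets from Nginx to the upstream on port 3000
ss -tn state established '( dport = :3000 )'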
3. The HTTP/2 Revolution
Released in 2015, HTTP/2 is the most significant update to the web protocol in over a decade. For APIs, its binary framing and header compression (HPACK) are massive wins. If your API clients (mobile apps or single-page apps) support it, you can reduce latency by avoiding the head-of-line blocking that plagued HTTP/1.1.
Nginx 1.9.5+ supports this natively. You don't need SPDY anymore. Configuring it is absurdly simple, provided you have SSL set up (browsers require encryption for HTTP/2):
server {
    listen 443 ssl http2;
    server_name api.yourdomain.no;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.no/privkey.pem;

    # Prefer modern ECDHE/AEAD ciphers; the trailing 3DES entries are
    # legacy fallbacks only. Drop them if you don't need IE8-era clients.
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_protocols TLSv1.1 TLSv1.2;
}
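You can verify the negotiation from the command line. A quick check, assuming your local OpenSSL is 1.0.2+ (older builds lack ALPN and will silently fall back to HTTP/1.1):
echo | openssl s_client -connect api.yourdomain.no:443 -alpn h2 2>/dev/null | grep ALPN
# Expected: "ALPN protocol: h2"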
4. Data Sovereignty and Infrastructure
Performance is physics, but it's also geography. If your users are in Stavanger or Bergen, routing traffic through a "cloud" region in Frankfurt or Dublin adds 20-40ms of round-trip time (RTT) purely due to the speed of light and fiber switching.
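Don't take RTT figures on faith; measure from where your users actually sit (hostname illustrative):
# Average round-trip time over 20 probes
ping -c 20 api.yourdomain.no | tail -1
# Per-hop latency: shows exactly where the milliseconds are lost
mtr --report --report-cycles 20 api.yourdomain.no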
By hosting on CoolVDS infrastructure in Oslo, you are peering directly at NIX (Norwegian Internet Exchange). The latency drop is noticeable. Furthermore, with the recent uncertainty around US tech giants and the Datatilsynet (Norwegian Data Protection Authority) watching closely, keeping data within Norwegian borders is the safest bet for compliance.
Finally, a note on I/O. API logging can be heavy. If you are writing access logs to a standard spinning HDD, your iowait will skyrocket during traffic spikes, blocking Nginx workers. We equip all CoolVDS instances with enterprise NVMe storage. In our benchmarks, this results in a 4x reduction in write latency compared to standard SSDs.
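If you do keep access logs on, let Nginx batch the writes instead of hitting the disk once per request. A sketch using the stock buffer/flush parameters (path and log format are illustrative):
# Buffer up to 64k of log lines in memory; flush at least every 5s
access_log /var/log/nginx/api_access.log combined buffer=64k flush=5s;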
Summary
Tuning an API gateway is about removing friction. You open the kernel (sysctl), you widen the lanes (Nginx workers/keepalive), and you upgrade the protocol (HTTP/2). But software tuning only goes so far if the hardware beneath it is oversubscribed.
Don't let a slow hypervisor kill your optimization efforts. Deploy a KVM instance on CoolVDS today and see what your API is actually capable of.