The "Default Config" is Your Enemy
If you are serving an API out of Oslo to a European user base, and you are running a standard install of Nginx on a generic cloud instance, you are failing your developers. It is 2016. We have moved past the era of the monolithic LAMP stack into microservices, and in this architecture the API Gateway is a single point of failure: the one choke point every request must pass through.
I recently audited a payment processing cluster for a client in Bergen. They were complaining about "random" 502 Bad Gateway errors during traffic spikes. The application logs were clean. The database load was low. The culprit? A default Linux network stack that treated high-concurrency API traffic like it was a file transfer from 1999.
Latency is not just a nuisance; it is a conversion killer. Amazon found that every 100ms of latency cost them 1% in sales. If your API gateway adds 200ms of overhead before the request even hits your backend logic, you are burning money. Here is how we tune the stack, from the metal up.
1. The Hardware Reality Check
Before we touch a config file, let’s talk about the physical layer. Virtualization imposes a tax. If you are on a budget provider using OpenVZ, you are sharing a kernel with everyone else on the node. You cannot tune kernel parameters effectively if you do not own the kernel.
Pro Tip: Always use KVM virtualization for API Gateways. KVM provides full hardware virtualization, allowing you to modify sysctl.conf parameters that container-based solutions block. This is standard on CoolVDS because we refuse to let a "noisy neighbor" steal your entropy or CPU cycles.
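Not sure which you are on? A quick sanity check, assuming a systemd-based distro for the first command (the second file exists only inside OpenVZ containers):
# Prints the hypervisor/container type: "kvm", "openvz", "lxc", etc.
systemd-detect-virt
# If this file exists, you are inside an OpenVZ container and kernel tuning is off the table
cat /proc/user_beancounters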
Furthermore, disk I/O matters even for gateways (logging, buffering, cache). Spinning rust is dead. Ensure your provider offers pure local SSD storage rather than high-latency network-attached volumes.
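If you want proof rather than marketing copy, a crude synchronous-write test tells the story. Run it on the volume your access logs live on; the test file is a throwaway:
# 1000 x 4KB writes, each forced to disk before the next begins
# Local SSD finishes this in a second or two; spinning or network-attached disks take far longer
dd if=/dev/zero of=./ddtest bs=4k count=1000 oflag=dsync && rm ./ddtest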
2. Linux Kernel Tuning (`sysctl.conf`)
The Linux kernel defaults are conservative. They are designed to save RAM on machines with 512MB of memory. On a modern CoolVDS instance with 8GB+ RAM, these defaults will strangle your throughput.
Open /etc/sysctl.conf. We need to widen the TCP pipe.
# Increase the maximum number of open files
fs.file-max = 2097152
# Increase the read/write buffer sizes for TCP
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the size of the backlog queue
# The default is often 128, which causes dropped connections during bursts
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
# Reuse sockets in TIME_WAIT state for new connections
# Critical for API gateways handling many short-lived connections
net.ipv4.tcp_tw_reuse = 1
# Shorten how long sockets linger in FIN-WAIT-2 (the default is 60 seconds)
net.ipv4.tcp_fin_timeout = 15
# Protect against SYN flood attacks
net.ipv4.tcp_syncookies = 1
Apply these changes with sysctl -p. The somaxconn setting is particularly vital. If your Nginx is configured to accept 4000 connections but the kernel's accept queue is capped at 128, the kernel will silently drop connection attempts during bursts. This manifests as "connection timed out" on the client side, while your server looks idle.
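Before and after tuning, ask the kernel whether it has been dropping connections behind your back (exact wording of the counters varies slightly by distro):
# Non-zero counters here mean the accept or SYN queues overflowed
netstat -s | grep -i listen
# For listening sockets, the Send-Q column shows the effective backlog now in force
ss -lnt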
3. Nginx: The Engine
Nginx 1.9.x is the weapon of choice here. If you are still on Apache for an API Gateway, stop reading and migrate. Apache's process-per-connection model cannot scale to the concurrency levels required by modern mobile apps.
However, apt-get install nginx gives you a configuration meant for serving static HTML, not proxying JSON at high velocity.
Worker Configuration
Edit your /etc/nginx/nginx.conf:
worker_processes auto;
worker_rlimit_nofile 100000;
events {
worker_connections 4096;
use epoll;
multi_accept on;
}
worker_rlimit_nofile must comfortably exceed worker_connections; aim for at least double, since a single proxied request consumes two file descriptors (one to the client, one to the upstream). The multi_accept on directive tells a worker process to accept all pending connections at once, rather than one per event-loop iteration. This is crucial for bursty API traffic.
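Reload Nginx and confirm the descriptor limit actually reached the workers (assumes pgrep is available):
# "Max open files" should now report 100000 for every worker
grep 'open files' /proc/$(pgrep -f 'nginx: worker' | head -n1)/limits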
Upstream Keepalive
This is the most common mistake I see. Nginx speaks HTTP/1.0 to the backend by default. This means for every API request, Nginx opens a new TCP connection to your backend application (Node.js, Go, PHP-FPM), sends the request, and closes the connection.
The TCP handshake overhead will destroy your latency numbers. You must enable keepalive to the upstream.
upstream backend_api {
server 10.0.0.5:8080;
# Keep 64 idle connections open to the backend
keepalive 64;
}
server {
location /api/ {
proxy_pass http://backend_api;
# Required for keepalive to work
proxy_http_version 1.1;
proxy_set_header Connection "";
# Pass headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
By reusing connections, we often see internal latency drop from 30ms to 2ms.
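You can watch the reuse happen. With keepalive working, the connection count to the upstream holds steady near 64 under load instead of churning through ephemeral ports (10.0.0.5:8080 is the example backend from above):
# Count live gateway-to-backend connections; a stable number means reuse is working
ss -tn state established '( dport = :8080 )' | grep -c 10.0.0.5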
4. SSL/TLS Termination: The Heavy Lifting
With the recent death of Safe Harbor and the looming EU data protection regulations, encryption is non-negotiable. But TLS is CPU-expensive: the initial handshake consumes significant resources.
To mitigate this, we rely on Session Resumption and proper cipher suites. Ensure your OpenSSL build is at least 1.0.1; anything older lacks TLS 1.2 and the AES-GCM suites below.
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Prioritize Elliptic Curve ciphers for performance
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
A 10MB cache (shared:SSL:10m) can store approximately 40,000 sessions. This allows returning clients (like a mobile app making sequential requests) to skip the heavy handshake and perform an abbreviated one.
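Verify resumption from the outside with OpenSSL's built-in reconnect test (api.example.com is a placeholder for your own endpoint):
# -reconnect performs one full handshake, then reconnects five times with the same session
# You want one "New" followed by "Reused" lines; "New" every time means the cache is not engaging
openssl s_client -connect api.example.com:443 -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'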
5. The Norway Factor: Network Topology
Physics is stubborn. If your servers are in Frankfurt but your users are in Trondheim, you are fighting the speed of light. The round-trip time (RTT) creates a hard floor on your API performance.
Hosting in Norway, specifically connected to NIX (Norwegian Internet Exchange), ensures that traffic stays local. It keeps data within Norwegian jurisdiction—a critical selling point for enterprise clients nervous about US surveillance post-Schrems I.
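curl will give you the per-request breakdown (again, substitute your own endpoint):
# time_connect = TCP handshake, time_appconnect = TLS complete, time_starttransfer = TTFB
curl -o /dev/null -s -w 'TCP: %{time_connect}s  TLS: %{time_appconnect}s  TTFB: %{time_starttransfer}s\n' \
  https://api.example.com/health
On a 30ms RTT path, a full TLS 1.2 handshake over a fresh TCP connection costs roughly three round trips before the first byte moves. Run this from Trondheim against Frankfurt and then against Oslo, and the table below explains itself.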
| Feature | Generic Cloud | CoolVDS Norway |
|---|---|---|
| Virtualization | Often Shared Kernel (Container) | Dedicated KVM |
| Storage | Network Storage (High Latency) | Local SSD RAID-10 |
| Kernel Access | Restricted | Full Root / Custom Kernel |
| Data Location | Undefined / Europe | Oslo, Norway |
Conclusion
Optimizing an API gateway is an exercise in removing bottlenecks. You start with the hardware (SSD, CPU), tune the kernel to handle the connection volume, and configure Nginx to manage the traffic flow efficiently.
But software tuning can only take you so far. If the underlying infrastructure is oversubscribed, your sysctl tweaks are useless. You need guaranteed resources.
Stop fighting noisy neighbors. Deploy your gateway on a platform built for performance.
Ready to lower your TTFB? Spin up a high-performance KVM instance on CoolVDS today and experience the difference of local Norwegian peering.