API Gateway Tuning: Pushing Nginx & HAProxy 1.5 to the Limit in 2014

Let’s be honest: your default nginx.conf is garbage. It was written for a world where 100 concurrent users was a "load spike" and hard drives spun at 7200 RPM. But we are in late 2014. If you are building APIs for the Norwegian market—or anywhere in Europe—latency is the enemy, and default configurations are the bottleneck.

I recently audited a Magento deployment in Oslo that was timing out during peak hours. The developers blamed the PHP code. The DBAs blamed the query complexity. I looked at the edge router and saw Nginx choking on file descriptors. After twenty minutes of kernel tuning and switching their load balancer to the newly released HAProxy 1.5, we quadrupled their throughput on the exact same hardware.

If you care about raw performance, you stop trusting defaults. You start tuning.

The Kernel: Where Performance Actually Starts

Before we even touch the application layer, we need to talk about the Linux kernel. Most distributions like CentOS 6 or Ubuntu 14.04 ship with conservative network stack settings designed for general-purpose computing, not high-throughput packet switching.

When you operate an API Gateway, you are essentially juggling thousands of TCP sockets. If your kernel runs out of file descriptors or ephemeral ports, your users see a 502 Bad Gateway. Here is the sysctl.conf baseline I use for every high-load node deployed on CoolVDS:

# /etc/sysctl.conf optimizations

# Maximize open file descriptors
fs.file-max = 2097152

# Reuse sockets in TIME_WAIT state for new outbound connections
# Critical for API backends with frequent short-lived connections
net.ipv4.tcp_tw_reuse = 1

# Leave tcp_tw_recycle OFF: it silently drops connections from clients
# behind NAT (carrier-grade NAT, office networks) and is not worth the
# marginal gain

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535

# Maximize the backlog queue
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 50000

# Reduce keepalive time to clear dead connections faster
net.ipv4.tcp_keepalive_time = 300

Run sysctl -p to apply these. Without these settings, your fancy Nginx config is just a Ferrari engine inside a Fiat chassis.
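One caveat: fs.file-max only raises the system-wide ceiling; each worker process is still bound by its own nofile limit. Here is a minimal sketch of the per-user limits I pair with the sysctl settings above (the usernames and values are illustrative, adjust to your setup):

# /etc/security/limits.conf - per-process descriptor limits
nginx    soft    nofile    65535
nginx    hard    nofile    65535
haproxy  soft    nofile    65535
haproxy  hard    nofile    65535

# Verify after restarting the services:
#   cat /proc/$(pidof -s nginx)/limits | grep "open files"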

Nginx 1.6: The Workhorse

Nginx 1.6 is the current stable standard, and it is a beast if you configure it right. The most common mistake I see is leaving worker_processes at 1. In 2014, even a modest VPS has multiple cores. Use them.

Key Nginx Directives

1. Worker Processes & Connections
Set worker_processes auto; if your build supports it, or manually tie it to your core count (a main-context sketch follows the events block below). More importantly, bump your worker_connections. The default of 512 is laughable.

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}
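Note that worker_connections is capped by each worker's open-file limit, so pair the events block with these main-context directives (the values here are a starting point, not gospel):

# Main context (outside events {} and http {})
worker_processes auto;        # one worker per CPU core
worker_rlimit_nofile 65535;   # must be >= worker_connections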

2. Buffering & Timeouts
APIs are different from static sites. For pure API traffic you generally want to fail fast and disable response buffering so replies stream straight to the client (a proxy-level sketch follows the block below); for general web use, properly sized buffers keep request bodies from thrashing the disk.

http {
    # Don't buffer the request body to disk if you can help it
    client_body_buffer_size 128k;
    client_max_body_size 10m;

    # Keepalives are crucial for SSL performance
    keepalive_timeout 65;
    keepalive_requests 100000;

    # Open File Cache - vital for static assets or cached API responses
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
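To make the "fail fast, disable buffering" advice concrete, here is a rough sketch of the proxy settings I would start from for a pure API location. The upstream name and timeout values are assumptions; tune them to your backend's real response times:

location /api/ {
    proxy_pass http://api_backend;        # assumes an upstream api_backend {} block
    proxy_buffering off;                  # stream responses straight to the client
    proxy_connect_timeout 2s;             # fail fast if the backend is unreachable
    proxy_read_timeout 10s;
    proxy_send_timeout 10s;
    proxy_set_header X-Real-IP $remote_addr;
}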
Pro Tip: If you are serving users in Norway, enable the stub_status module and graph it. If you see your "Writing" connections spike while CPU is low, you have an I/O bottleneck. This is where standard HDDs fail and why CoolVDS insists on KVM-backed instances with high-performance PCIe SSD storage.
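If you need a starting point, a minimal stub_status endpoint looks roughly like this. It assumes Nginx was built with --with-http_stub_status_module, and the allowed address is a placeholder for your monitoring host:

server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status on;       # active / reading / writing / waiting counters
        access_log off;
        allow 127.0.0.1;      # monitoring host only
        deny all;
    }
}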

HAProxy 1.5: Native SSL is Finally Here

For years, we had to put Nginx or Stunnel in front of HAProxy because HAProxy didn't speak SSL. That added latency and complexity. With the release of HAProxy 1.5 in June, that era is over. We now have native SSL termination. This is a massive win for simplicity and latency reduction.

Here is a snippet for a high-performance HAProxy 1.5 config that handles SSL offloading and balances traffic to backend API servers:

global
    maxconn 100000
    tune.ssl.default-dh-param 2048

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend api_gateway_https
    bind *:443 ssl crt /etc/ssl/private/api.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend api_servers

backend api_servers
    balance roundrobin
    option httpchk GET /health
    server api01 10.0.0.1:80 check maxconn 5000
    server api02 10.0.0.2:80 check maxconn 5000

This setup removes the Nginx-to-HAProxy hop, shaving precious milliseconds off every request.
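Before you reload, syntax-check the file and expose the built-in stats page so you can watch backend health in real time. The port and credentials below are placeholders:

# Validate the config without touching the running process
# haproxy -c -f /etc/haproxy/haproxy.cfg

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats auth admin:changeme    # placeholder credentials - change these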

The Hardware Reality: Why Virtualization Matters

You can tune software all day, but if your host is stealing CPU cycles or your I/O wait is high, you will lose. In the hosting market right now, there is a lot of noise about "Cloud," but much of it is just oversold OpenVZ containers.

When you are dealing with high-throughput APIs, you suffer from the "Noisy Neighbor" effect. If another customer on the same physical host decides to compile a kernel or run a backup, your API latency spikes. That is unacceptable for mission-critical workloads.
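You can actually measure this. Steal time (the "st" column in vmstat, or %st in top) is how long your virtual CPU sat waiting while the hypervisor served someone else; anything consistently above a few percent means a neighbor is eating your cycles:

# Watch the last column ("st") over five one-second samples
vmstat 1 5

# The same figure appears as %st on the Cpu(s) line
top -b -n 1 | grep "Cpu(s)"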

This is why at CoolVDS, we strictly use KVM (Kernel-based Virtual Machine). KVM provides hardware virtualization. Your RAM is your RAM. Your CPU cores are allocated to you. Combined with our Enterprise-grade SSD storage, we eliminate the I/O wait times that plague traditional spinning-disk VPS providers.

Norwegian Data Context

For those of us operating in Oslo or Stavanger, latency to the NIX (Norwegian Internet Exchange) is a key metric. Hosting your API Gateway in a datacenter in Texas when your users are in Trondheim is asking for trouble. Furthermore, with the increasing focus on data privacy (especially concerning the Personal Data Act / Personopplysningsloven), keeping data within national borders or the EEA is becoming a legal necessity, not just a performance one.
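If you want hard numbers rather than gut feeling, curl's timing variables break a request down per phase. Run it from the networks your users actually sit on; the hostname here is a placeholder:

# DNS, TCP connect and total time against your API endpoint
curl -o /dev/null -s \
     -w "dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n" \
     https://api.example.no/health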

Final Thoughts

Performance is a stack. It starts with hardware (SSD is non-negotiable in 2014), moves to the kernel (sysctl tuning), and ends with your application gateway (Nginx/HAProxy). Do not settle for defaults. Test your limits.

If you are ready to stop fighting I/O wait and start serving requests, it is time to upgrade.

Don't let slow hardware kill your API performance. Deploy a KVM-optimized instance on CoolVDS today and experience the difference of pure SSD speed.