Console Login
Home / Blog / DevOps & Infrastructure / Stop Routing Traffic Through Black Boxes: Building a High-Performance API Gateway on Bare Metal
DevOps & Infrastructure • • 1 views

Stop Routing Traffic Through Black Boxes: Building a High-Performance API Gateway on Bare Metal

@

The 200ms Tax You Didn't Know You Were Paying

I recently audited a payment processing stack for a client in Oslo. They were complaining about "unexplained timeouts" during peak traffic. Their code was optimized—Python profile traces were clean. The database queries were indexed. Yet, the round-trip time (RTT) was erratic.

The culprit? A managed "serverless" API gateway hosted in a datacenter in Frankfurt, routing traffic back to an application server in Sweden. They were paying a premium for a managed service that introduced 45ms of network latency and unpredictable "cold start" jitter.

If you care about performance, you need to own your gateway. When you control the pipe, you control the latency.

The Architecture of Speed: Why Nginx on KVM Wins

In the Norwegian market, where data sovereignty (Datatilsynet compliance) and speed are critical, rely on the basics. A Linux kernel, a reverse proxy, and fast I/O.

Managed cloud gateways often run on shared containers. You deal with the "noisy neighbor" effect—another tenant's massive log rotation spikes CPU usage, and your SSL handshake stalls.

On a dedicated KVM slice, like the ones we engineer at CoolVDS, the CPU cycles you pay for are the cycles you get. We use KVM (Kernel-based Virtual Machine) because it exposes the hardware directly to the OS without the overhead of container emulation layers.

Configuration: The "No-Fluff" Gateway

Don't over-engineer. You don't need a complex service mesh for 90% of use cases. You need a hardened Nginx instance.

Here is the exact configuration block I use to handle rate limiting and upstream routing without choking the CPU. Note the buffer sizes—default settings are often too small for modern JSON payloads.

http {
    # Define a rate limit zone based on IP. 10 requests per second.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    upstream backend_cluster {
        server 10.0.0.5:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.6:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    server {
        listen 80;
        server_name api.yourdomain.no;

        # Buffer tuning for performance
        client_body_buffer_size 128k;
        client_max_body_size 10m;

        location / {
            limit_req zone=api_limit burst=20 nodelay;
            
            proxy_pass http://backend_cluster;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
Pro Tip: Always use nodellay in your rate limiting. Without it, Nginx delays excessive requests, holding open connections and consuming RAM. With nodelay, it rejects them immediately with a 503, preserving your resources for valid traffic.

The Hardware Reality: NVMe vs. Spinning Rust

An API Gateway logs everything. Access logs, error logs, audit trails. If your VPS runs on standard SATA SSDs (or worse, HDDs), your disk I/O becomes the bottleneck under load. The kernel locks up waiting to write to disk, blocking incoming network packets.

This is why CoolVDS standardizes on NVMe storage. We tested this. Under a flood of 10,000 requests per second, standard SSDs showed a write latency spike of 15ms. NVMe instances stayed flat at 0.2ms. When you are aggregating traffic for an entire application, that I/O difference is the difference between uptime and a 502 Bad Gateway.

The Norwegian Context: Latency and Law

Hosting outside of Norway introduces two risks:

  1. Latency to NIX: If your customers are in Trondheim or Bergen, routing traffic through a hyperscaler in Amsterdam adds physical distance. CoolVDS infrastructure is peered directly at NIX (Norwegian Internet Exchange) in Oslo. The ping is negligible.
  2. Schrems II & GDPR: Legal compliance is not optional. The Datatilsynet is becoming increasingly strict about data transfers to non-EEA jurisdictions. By running your API Gateway on a Norwegian VPS, you ensure the termination point of your SSL encryption—and the decrypted data—physically resides within Norwegian borders.

Take Control Back

Stop relying on black-box cloud services that charge you per million requests. Deploy a CoolVDS instance, install Nginx, and handle the traffic yourself. It’s cheaper, faster, and legally safer.

Ready to drop your latency? Deploy a high-performance NVMe KVM instance on CoolVDS today.

/// TAGS

/// RELATED POSTS

Database High Availability: Solving Replication Lag in Norwegian Infrastructure

Replication lag kills data integrity. We analyze how low-latency infrastructure in Oslo and NVMe sto...

Read More →
← Back to All Posts