RIP TCP: Why We're Switching to QUIC (HTTP over UDP) for Low-Latency Mobile Delivery
If you are still relying solely on TCP for your transport layer in 2018, you are leaving performance on the table. I've spent the last week auditing the latency logs for a major Norwegian e-commerce client, and the results were frustratingly consistent. Their backend logic (running on PHP 7.2) is optimized to death, executing in under 45ms. Yet, users in Tromsø on 4G connections were seeing First Contentful Paint times upwards of 1.5 seconds.
The culprit isn't the code. It isn't the database. It's the protocol.
TCP is a dinosaur. Designed in the 1970s for wired networks where in-order delivery was sacrosanct, it struggles on modern mobile networks where packet loss and tower handovers are facts of life. Enter QUIC (Quick UDP Internet Connections). It is the Google-born protocol that moves HTTP semantics over UDP, and it is the single biggest upgrade to web performance since we moved from HTTP/1.1 to HTTP/2.
The Problem: Head-of-Line Blocking
We all jumped on the HTTP/2 bandwagon for multiplexing. It promised to fix the browser connection limit issue. And it did. But it introduced a nasty side effect: TCP Head-of-Line (HoL) Blocking.
In HTTP/2, you stream multiple requests over a single TCP connection. If one TCP packet gets dropped (common on mobile networks when switching towers or entering a tunnel), the operating system's TCP stack halts the entire stream until that one packet is retransmitted. Your CSS, JS, and images all stop loading because of one missing byte. This defeats the purpose of multiplexing.
QUIC solves this by moving to UDP. If a packet is lost, only the stream associated with that packet waits. The rest of the page keeps loading. For a user commuting through Oslo's subway tunnels, this difference is perceptible.
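You don't have to take my word for the TCP-under-loss behavior; you can reproduce it on a disposable test box with Linux's traffic shaper. A rough sketch, where `eth0`, the hostname, and the 2% / 50ms figures are placeholders you'd tune to your own setup:

```shell
# Emulate a flaky 4G link: 2% random packet loss plus 50ms added delay
# (run as root on a TEST machine only; eth0 is a placeholder)
tc qdisc add dev eth0 root netem loss 2% delay 50ms

# Time a fetch over TCP, then load the same page in a QUIC-capable
# Chrome and compare — under loss, the TCP numbers degrade much faster
curl -so /dev/null -w 'total: %{time_total}s\n' https://example.no/

# Remove the impairment when done
tc qdisc del dev eth0 root
```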
The "War Story": Enabling QUIC in Production
Implementing QUIC in July 2018 isn't straightforward. Nginx (my usual go-to) does not support it natively yet. You can experiment with Google's libquic or the Go quic-go library, but wiring either into an existing stack is a maintenance nightmare for production systems.
For this deployment, we swapped the edge layer to Caddy 0.11, which ships experimental QUIC support via the quic-go library (activated by starting Caddy with the -quic flag). However, simply installing the software wasn't enough. We hit a wall immediately: the server's UDP buffers were too small, leading to dropped packets at the kernel level before they even reached the application.
Step 1: Kernel Tuning for UDP
Linux is tuned for TCP by default. To handle high-volume QUIC traffic, you must increase the receive and send window sizes for UDP. On our CoolVDS NVMe instances, we apply the following sysctl configurations to ensure the network stack doesn't become the bottleneck:
# /etc/sysctl.conf
# Increase UDP buffer sizes for high-speed QUIC transfers
net.core.rmem_max = 2500000
net.core.wmem_max = 2500000
# Increase default buffer limits
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
# Raise the kernel's overall UDP memory limits (min / pressure / max, in pages)
net.ipv4.udp_mem = 65536 131072 262144
Reload the settings with:
sysctl -p
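It's worth verifying the kernel actually accepted the new limits, and keeping an eye on the drop counters afterwards:

```shell
# Read back the effective limits; they should match sysctl.conf
cat /proc/sys/net/core/rmem_max /proc/sys/net/core/wmem_max

# The Udp: rows in /proc/net/snmp include an RcvbufErrors column;
# if it keeps climbing under load, the buffers are still too small
grep '^Udp:' /proc/net/snmp
```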
Step 2: The Firewall Trap
A rookie mistake is configuring the web server but forgetting the firewall. Most default policies allow TCP port 443 but block UDP port 443. If UDP is blocked, the browser will silently fall back to TCP (after a timeout delay), making your site slower than before. Explicitly allow UDP traffic.
# If using UFW (Uncomplicated Firewall)
sudo ufw allow proto udp from any to any port 443
# Or raw iptables
iptables -A INPUT -p udp --dport 443 -j ACCEPT
Step 3: Caddy Configuration
With the system tuned, the Caddyfile configuration is deceptively simple. Unlike Apache's verbose directive blocks, Caddy handles certificate generation (Let's Encrypt) and QUIC negotiation automatically.
example.no {
    root /var/www/html
    gzip
    # QUIC requires starting Caddy 0.11 with the -quic flag; ensure TLS is on
    tls admin@example.no
    # Add header to advertise QUIC support to the browser
    header / {
        Alt-Svc "quic=\":443\"; ma=2592000; v=\"44,43,39\""
    }
}
Pro Tip: The Alt-Svc header is crucial. It tells Chrome (and other supported browsers) that "Hey, I speak QUIC on UDP port 443." Without this, the browser will continue using HTTP/1.1 or HTTP/2 over TCP.
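If you script your deployment checks, a small helper can confirm that the value your edge actually returns advertises QUIC. This is a sketch; `check_alt_svc` is a hypothetical helper, and in practice you would feed it the header pulled from `curl -sI` against your own domain:

```shell
# check_alt_svc: succeeds only if the header value contains a quic=":port" entry
check_alt_svc() {
  echo "$1" | grep -Eq 'quic="[^"]*:[0-9]+"'
}

# The value configured in the Caddyfile above passes:
check_alt_svc 'quic=":443"; ma=2592000; v="44,43,39"' && echo "QUIC advertised"
# → QUIC advertised

# A TCP-only Alt-Svc value does not:
check_alt_svc 'h2=":443"' || echo "no QUIC entry"
# → no QUIC entry
```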
Infrastructure Matters: The CoolVDS Advantage
Here is the uncomfortable truth about hosting in 2018: Many VPS providers actively throttle or block UDP traffic.
Why? Because UDP is the protocol of choice for DDoS amplification attacks (NTP reflection, Memcached reflection). Lazy hosting providers simply rate-limit UDP to mitigate risk. If you try to run a high-traffic QUIC site on a budget host, you will hit these limits, and your users will experience packet loss—ironically causing the exact problem we are trying to solve.
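Before committing to a provider, measure. A crude but effective check is to blast UDP at an endpoint you control and compare it with TCP on the same path. This sketch assumes iperf3 is installed on both ends; `iperf.example.net` and the 500 Mbit/s target are placeholders:

```shell
# UDP: request 500 Mbit/s for 10 seconds and read the reported loss %
iperf3 -c iperf.example.net -u -b 500M -t 10

# TCP baseline on the same path
iperf3 -c iperf.example.net -t 10
```

If the UDP run shows heavy loss while the TCP run sails through at full rate, the host is shaping UDP, and QUIC will suffer accordingly.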
At CoolVDS, we take a different approach. We don't blanket-throttle UDP. Our network edge in Oslo uses advanced traffic scrubbing that distinguishes between legitimate QUIC streams and DDoS signatures. This allows us to offer:
- Unrestricted UDP I/O: Essential for QUIC and future real-time applications (WebRTC).
- Low Latency Peering: Direct routing to NIX (Norwegian Internet Exchange) ensures your packets don't take a detour through Frankfurt to get from Oslo to Bergen.
- NVMe Storage: With QUIC removing the network bottleneck, the bottleneck shifts to disk I/O. Our NVMe arrays ensure your static assets are read faster than the network can send them.
Verifying the Protocol
Once deployed, don't just trust it works. Open Chrome, install the "HTTP/2 and SPDY indicator" extension (it supports QUIC), or check the protocol via `chrome://net-internals/#quic`. You should see active sessions listed with the version `Q043` or `Q044`.
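You can also force negotiation from the client side to rule out Alt-Svc caching issues. Chrome builds of this era accept the following flags (the binary name varies by platform, and the origin here is a placeholder):

```shell
# Launch Chrome with QUIC forced on for a single origin
google-chrome --enable-quic --origin-to-force-quic-on=example.no:443
```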
On the server side, verify listening ports:
# Check for UDP listener on port 443
netstat -ulnp | grep :443
# Output should look like:
udp6 0 0 :::443 :::* 8211/caddy
The Verdict
We are on the cusp of a major shift. The IETF is currently standardizing "HTTP over QUIC," and it's likely to become the next official HTTP version. By adopting it now, you aren't just speeding up your site; you are future-proofing your infrastructure against the latency inherent in mobile networks.
Don't let legacy TCP choke your application's potential. Speed is a feature. If you want to test QUIC without the "noisy neighbor" interference common on shared hosting, spin up a CoolVDS NVMe instance. The network stack is ready.