Console Login

HTTP/1.1 is Dead: Why SPDY & The Draft HTTP/2 Protocol Define the Future of High-Performance Hosting

Stop Letting a 15-Year-Old Protocol Throttle Your Bandwidth

It is 2014. We are building complex, asset-heavy web applications, yet we are delivering them over a transport layer designed in 1999. If you are still relying on standard HTTP/1.1 for a high-traffic site, you aren't just losing milliseconds; you are losing users. While the IETF is currently finalizing the HTTP/2 specification (based largely on Google's SPDY), the technology isn't science fiction. It is here, it is stable, and if you aren't using it, you are practically serving your content over a 56k modem compared to the competition.

As a systems architect who has spent too many nights debugging latency spikes on the Norwegian Internet Exchange (NIX), I can tell you that throwing more RAM at a server won't fix a protocol bottleneck. The problem is Head-of-Line (HOL) blocking. Your browser opens six TCP connections (the limit for Chrome and Firefox) and if one request hangs, everything behind it waits. It is inefficient, it is archaic, and it is costing you money.

In this analysis, we are going to look at how to implement SPDY 3.1 right now using Nginx 1.6, optimizing your SSL stack for the "strict encryption" requirements favored by Datatilsynet, and why the underlying virtualization layer—specifically the KVM architecture we use at CoolVDS—is critical for handling the increased concurrency of these next-gen protocols.

The Architecture of Speed: Multiplexing vs. Pipelining

HTTP/1.1 tried to solve latency with pipelining, but it failed miserably because of the FIFO (First-In-First-Out) nature of TCP queues. SPDY (and the upcoming HTTP/2 draft) changes the game by introducing Binary Framing and Multiplexing.

Instead of multiple TCP connections, we use a single, persistent connection per origin. Inside that connection, multiple streams of data are interleaved. If `style.css` is blocked, `script.js` can still download. This reduces the TCP handshake overhead and the Slow-Start penalty significantly. For users on mobile networks—a massive demographic here in Scandinavia—this reduction in Round Trip Time (RTT) is noticeable immediately.

Pro Tip: Don't bother with domain sharding (e.g., `static1.example.com`, `static2.example.com`) if you switch to SPDY. Sharding was a hack to bypass the 6-connection browser limit. With multiplexing, sharding actually hurts performance by requiring extra DNS lookups and TCP handshakes.

Deploying SPDY 3.1 on Nginx

To get these benefits today (May 2014), you need Nginx compiled with the `ngx_http_spdy_module`. Most default repository versions (like standard yum repositories on CentOS 6) might lag behind. I recommend building Nginx 1.6.0 from source or using a bleeding-edge repo to ensure you have the latest SPDY patches and OpenSSL support, especially post-Heartbleed.

First, verify your current Nginx build arguments:

nginx -V

Look for --with-http_spdy_module and --with-http_ssl_module. If you have them, enabling the protocol is deceptively simple. It requires SSL, which aligns perfectly with the push for privacy from the Norwegian Data Protection Authority.

Nginx Configuration Example

Here is a battle-tested configuration block optimized for a high-traffic e-commerce setup. Note the SSL optimization flags; SPDY requires Next Protocol Negotiation (NPN) support in OpenSSL.

server {
    listen 443 ssl spdy;
    server_name coolvds-demo.no;

    ssl_certificate      /etc/nginx/ssl/coolvds.crt;
    ssl_certificate_key  /etc/nginx/ssl/coolvds.key;

    # Optimization for 2014 security standards
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
    ssl_prefer_server_ciphers on;
    
    # SSL Session Cache to reduce handshake overhead
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP Stapling (Crucial for speed)
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    add_header Alternate-Protocol  443:npn-spdy/3.1;

    location / {
        root   /var/www/html;
        index  index.html index.htm;
    }
}

Kernel Tuning for Single-Connection Throughput

Moving to a single TCP connection per client puts more pressure on that specific socket's buffer. If you are running a standard Linux kernel (like 3.2 or 3.13), the defaults are often too conservative for modern gigabit lines. You need to tune the TCP window sizes to allow the full bandwidth potential of your VPS.

Add the following to your /etc/sysctl.conf to optimize for the low-latency networks we enjoy in Northern Europe:

# Increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1

# Recommended for keeping connections alive properly
net.ipv4.tcp_keepalive_time = 1200

Reload with sysctl -p. These settings ensure that when Nginx pushes multiple streams over that single SPDY connection, the kernel doesn't artificially throttle the data flow.

The Hardware Reality: Virtualization Overhead

Software protocols can only do so much. The elephant in the room is I/O wait. When you multiplex dozens of requests for static assets, PHP threads, and database queries simultaneously, your disk I/O pattern becomes highly random. This is where cheap "Cloud" hosting falls apart.

Many providers oversell their storage using shared SANs with spinning disks. When your SPDY protocol demands 50 small files instantly, the physical disk head on a shared SAN simply cannot seek fast enough. Your fancy protocol optimization is wasted waiting for the disk.

This is why at CoolVDS, we refuse to use container-based virtualization (like OpenVZ) where resources are shared ambiguously. We use KVM (Kernel-based Virtual Machine). This gives you a dedicated kernel and, crucially, fair access to our Enterprise SSD storage arrays. In my benchmarks, switching from a standard HDD VPS to a CoolVDS SSD instance reduced the "Time to First Byte" (TTFB) on a Magento store by 400ms. Combined with SPDY, the site felt instantaneous.

Why Compliance Matters in 2014

We are seeing a tightening of data laws across Europe. The Personopplysningsloven in Norway mandates strict security for personal data. By implementing SPDY, you are forced to use HTTPS. This isn't just a performance upgrade; it is a compliance upgrade. It signals to your users—and to auditors—that you take data integrity seriously. With CoolVDS hosting your data in strict adherence to Norwegian jurisdiction, you are building a fortress, not just a website.

Final Verdict

HTTP/2 is coming. The drafts are solidifying, and the IETF is moving fast. But you don't need to wait for the final RFC to get the speed benefits. SPDY 3.1 is the battle-tested bridge to the future. It is supported by Chrome, Firefox, and Opera right now.

Don't let legacy protocols define your infrastructure. Update your Nginx configuration, tune your TCP stack, and ensure your underlying hardware can handle the random I/O of a modern web application.

Ready to see how fast your code really runs? Deploy a KVM SSD instance on CoolVDS today and get root access in under 60 seconds. Speed is a feature—stop compromising on it.