PWA Performance: Why Your Infrastructure is Bottlenecking Your Service Worker

The "App Shell" Fallacy

Everyone is talking about Progressive Web Apps right now. With iOS 11.3 finally shipping Service Worker support back in March, the floodgates are open. Clients want that native feel without the App Store friction. So, you build a sleek App Shell in React or Vue, you configure your sw.js to cache assets, and you deploy.

Then you run a Lighthouse audit from Chrome's DevTools, and the performance score is garbage.

Why? Because you focused entirely on the client-side JavaScript and ignored the metal it runs on. A Service Worker cannot cache what it hasn't downloaded yet. If your Time To First Byte (TTFB) is hovering around 600ms because you are on a noisy shared host, your PWA is dead on arrival. The "Progressive" part of PWA implies it works for everyone, but on a flaky 3G connection in rural Norway, latency is the only metric that matters.
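
Don't guess at that number; measure it, ideally from a network far from your datacenter. A minimal sketch using curl's built-in timers (substitute your own origin for the placeholder domain used throughout this post):

curl -o /dev/null -s -w "DNS: %{time_namelookup}s  TLS: %{time_appconnect}s  TTFB: %{time_starttransfer}s\n" \
    https://pwa.example.no/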

The Protocol: HTTP/2 is Not Optional

In 2018, serving a PWA over HTTP/1.1 is malpractice. PWAs typically involve loading dozens of small JavaScript chunks and JSON payloads. Under HTTP/1.1 you hit head-of-line blocking: browsers cap simultaneous connections per origin (usually 6), your assets queue up, and the user stares at a white screen.

You need HTTP/2 multiplexing, which lets the browser request everything at once over a single TCP connection. But browsers only speak HTTP/2 over TLS, so simply enabling it isn't enough; you need to tune the TLS stack, because the handshake overhead can be significant.

Here is a production-ready Nginx configuration block we use on CoolVDS instances to force strong ciphers and enable H2:

server {
    listen 443 ssl http2;
    server_name pwa.example.no;

    # SSL Configuration for A+ Rating (Qualys SSL Labs)
    ssl_certificate /etc/letsencrypt/live/pwa.example.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pwa.example.no/privkey.pem;
    
    ssl_protocols TLSv1.2; # TLS 1.3 is still draft/experimental, stick to 1.2 for now
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
    ssl_prefer_server_ciphers on;
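
    # Session resumption cuts the cost of repeat handshakes for returning
    # visitors (cache size and timeout values here are illustrative)
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;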

    # HSTS (Crucial for PWA security requirements)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Optimization for TTFB
    ssl_buffer_size 4k;
}

Notice the ssl_buffer_size 4k; directive. By default, Nginx uses 16k TLS record buffers. That is fine for large file downloads, but it hurts a PWA app shell: Nginx fills the whole buffer before sending a record, and the browser cannot decrypt anything until the complete record arrives. Dropping it to 4k gets the first bytes onto the wire sooner and measurably reduces TTFB.
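
To confirm that H2 and the intended cipher actually got negotiated after a reload, OpenSSL's test client is enough (this assumes OpenSSL 1.0.2+ for ALPN support; the domain is the same placeholder as above):

echo | openssl s_client -alpn h2 -connect pwa.example.no:443 2>/dev/null | grep -E 'ALPN|Cipher'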

Compression: Gzip vs. Brotli

Stop using Gzip for static assets. Brotli (developed by Google) has been available for Nginx via the third-party ngx_brotli module since roughly 2016, but few people compile it in. It offers 20-26% better compression ratios than Gzip for HTML, CSS, and JS. For a PWA where every kilobyte counts toward the Time to Interactive metric, this is free speed.

If you installed Nginx from the stock yum/apt repositories, you probably don't have the module; you will need to compile Nginx from source or use a third-party repo that ships it. On our CoolVDS KVM templates we often recommend compiling Nginx anyway, to strip out unused modules.

# Inside nginx.conf
brotli on;
brotli_comp_level 6;
brotli_types
    text/plain text/css text/xml text/javascript
    application/javascript application/x-javascript application/json
    application/xml application/xhtml+xml application/xml+rss
    image/svg+xml image/x-icon image/vnd.microsoft.icon image/x-win-bitmap
    font/eot font/otf font/opentype
    application/x-font-ttf application/x-font-opentype application/x-font-truetype
    application/vnd.ms-fontobject;
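
If you do end up building it yourself, the rough shape is below. Treat this as a sketch: the Nginx version is whatever is current when you read this, and you should mirror the ./configure flags your existing binary reports via nginx -V.

# Fetch Google's ngx_brotli module (it pulls Brotli in as a submodule)
git clone https://github.com/google/ngx_brotli.git
cd ngx_brotli && git submodule update --init && cd ..

# Grab a matching Nginx source tree (1.14.0 is current as of this writing)
wget http://nginx.org/download/nginx-1.14.0.tar.gz
tar xzf nginx-1.14.0.tar.gz && cd nginx-1.14.0

# Re-run configure with your existing flags plus the module, then build
./configure --with-http_ssl_module --with-http_v2_module --add-module=../ngx_brotli
make && sudo make install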

Storage I/O: The NVMe Difference

Service Workers rely heavily on caching strategies (Stale-While-Revalidate, Cache-First, etc.). However, your database queries for dynamic content (API calls) must be instant. If you are running MySQL or PostgreSQL on standard SATA SSDs (or heaven forbid, spinning HDDs), your I/O wait times will spike during high traffic.

We recently migrated a Magento-based PWA from a legacy VPS provider to a CoolVDS instance with NVMe storage. The database `SELECT` queries dropped from an average of 0.2s to 0.04s. NVMe interfaces connect directly to the CPU via PCIe, bypassing the SATA controller bottleneck. When you have 50 concurrent users hitting your API, that queue depth matters.
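
If you want to sanity-check the storage underneath you before (or after) migrating, fio gives a quick random-read picture. A sketch with arbitrary sizes; point --filename at the filesystem your database actually lives on:

fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=1G --numjobs=4 \
    --group_reporting --filename=/var/tmp/fio-test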

Pro Tip: Check your disk scheduler. In a virtualized Linux guest the hypervisor is already reordering I/O, so you generally want `noop` or `deadline` rather than `cfq`. (On virtio disks the device shows up as `vda` instead of `sda`.)
cat /sys/block/sda/queue/scheduler
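
The bracketed entry is the active scheduler. Switching at runtime is a one-liner (root required); to make it persistent, add elevator=noop to the kernel command line:

echo noop | sudo tee /sys/block/sda/queue/scheduler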

Geography and Legal Compliance (GDPR)

It has been less than a month since GDPR came into full effect (May 25th). The panic is real. If you are hosting your database in a cheap US cloud region, you are navigating a minefield regarding data sovereignty. Moving your infrastructure to Norway (or the EEA) isn't just about latency—though being physically closer to NIX (Norwegian Internet Exchange) helps—it's about compliance.

CoolVDS data centers in Oslo allow you to keep user data within the legal framework required by the Datatilsynet (Norwegian Data Protection Authority). Latency to Oslo from major European hubs is often under 30ms.

Benchmark: TTFB Comparison

Setup          | Storage  | Protocol        | Avg TTFB (Oslo)
Shared Hosting | SATA SSD | HTTP/1.1        | 450ms+
Budget Cloud   | SATA SSD | HTTP/2          | 120ms
CoolVDS        | NVMe     | HTTP/2 + Brotli | 28ms

The Deployment Strategy

Don't rely on manual FTP uploads. In 2018 you should be using a CI/CD pipeline, even if it's just a Git hook. Here is a post-receive hook that deploys your PWA build artifacts to the Nginx web root automatically:

#!/bin/bash
TARGET="/var/www/pwa"
GIT_DIR="/var/repo/site.git"
BRANCH="master"

while read oldrev newrev ref
do
    if [[ "$ref" = "refs/heads/$BRANCH" ]]
    then
        echo "Ref $ref received. Deploying ${BRANCH}..."
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"

        # Install deps and build (assuming Node 8/10).
        # Note: no --production flag here; the build toolchain (webpack etc.)
        # normally lives in devDependencies, and npm run build fails without it.
        cd "$TARGET" || exit 1
        npm install
        npm run build

        # Graceful reload so Nginx workers pick up the new files
        systemctl reload nginx
    else
        echo "Ref $ref received. Doing nothing: only the ${BRANCH} branch may be deployed on this server."
    fi
done
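
Save this as hooks/post-receive inside the bare repository and make it executable. The remote name and SSH user below are placeholders; adjust for your setup:

# On the server
chmod +x /var/repo/site.git/hooks/post-receive

# On your workstation
git remote add production user@pwa.example.no:/var/repo/site.git
git push production master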

Conclusion

A PWA is only as fast as the server delivering it. You can write the cleanest React code in the world, but if your handshake takes 300ms and your database is I/O bound, your user experience will suffer. Do not let legacy infrastructure kill your project.

Switch to a provider that guarantees NVMe storage and KVM isolation. Spin up a CoolVDS instance in Oslo today and see what single-digit latency feels like.