
Optimizing CI/CD Throughput: Why I/O Latency Kills Pipelines (And How to Fix It)

Stop Treating Your Build Server Like a Garbage Can

I watched a deployment pipeline take 45 minutes yesterday. The team called it "normal." They used the downtime to grab coffee, play ping pong, or context-switch into oblivion. I looked at the logs. The CPU wasn't pinned. The RAM had headroom. But the iowait was sitting at 40%.
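
You can spot this on your own runner in under a minute. Assuming the sysstat package is available, iostat shows how much CPU time is burned waiting on the disk, and vmstat's "wa" column tells the same story:

# Debian/Ubuntu: iostat ships in the sysstat package
sudo apt-get install -y sysstat

# %iowait plus per-device utilisation, refreshed every 5 seconds
iostat -x 5

# Same signal in the "wa" column
vmstat 5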

This is the silent killer of DevOps velocity in late 2019. Everyone is obsessed with Kubernetes and microservice architectures, yet the CI runners end up on cheap, oversold VPS instances with spinning rust (HDDs) or choked SATA SSDs. When you run npm install or pip install, or compile C++ objects, you are hammering the filesystem with thousands of tiny write operations.

If you are deploying in Europe, specifically Norway, and you aren't optimizing for I/O and latency, you are throwing engineering hours into a furnace. Here is how we fix it.

The Anatomy of a Slow Pipeline

Most CI/CD bottlenecks stem from two specific areas: disk I/O contention and network latency during artifact transfer. Let's look at the disk first.

In a shared hosting environment, "Noisy Neighbors" are real. If the tenant next to you decides to re-index their Elasticsearch cluster, your IOPS (Input/Output Operations Per Second) tank. This is why we rely on KVM (Kernel-based Virtual Machine) at CoolVDS. Unlike OpenVZ, KVM provides stricter isolation. But hardware matters more.
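
Before you blame your YAML, measure what the disk underneath can actually do. A quick fio run (assuming fio is installed; it writes a couple of gigabytes of test files into whatever directory you point it at) reports the 4K random-write IOPS that package managers and compilers actually depend on:

# 4K random writes with direct I/O -- the access pattern of npm/pip/C++ builds
mkdir -p /tmp/fio-test
fio --name=ci-randwrite --directory=/tmp/fio-test \
    --rw=randwrite --bs=4k --size=512M --numjobs=4 --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting
rm -rf /tmp/fio-test

If the IOPS number comes back in the low thousands, no pipeline tweak will hide it: the disk is the bottleneck.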

Optimization 1: The Docker Storage Driver

If you are running Jenkins or GitLab runners inside Docker (which you should be, it's almost 2020), the storage driver dictates how image layers are written to disk. On older kernels, devicemapper, especially in its loopback mode, caused massive overhead. Today, if you aren't using overlay2, you are doing it wrong.

Check your current driver:

docker info | grep Storage

If it doesn't say overlay2, you need to update your /etc/docker/daemon.json. This removes the heavy lock contention found in older drivers.

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Pro Tip: Don't let Docker logs consume your I/O. The default json-file driver doesn't rotate logs at all unless you tell it to. The configuration above caps log files at 10 MB each, three files maximum, saving your disk throughput for actual build artifacts.
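
One caveat before you apply this: switching storage drivers makes Docker's existing images and containers invisible (they live under the old driver's directory), so do it on a fresh runner or be prepared to re-pull. Applying the change is just a restart and a re-check:

# Apply the new daemon.json and confirm the driver took effect
sudo systemctl restart docker
docker info | grep Storage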

Optimization 2: Ramdisk for Temporary Artifacts

Why write temporary build files to disk if they are going to be deleted in 5 minutes? RAM is orders of magnitude faster than even the best NVMe storage.

In your CI configuration, mount /tmp or your workspace directory as a tmpfs. This offloads the heavy read/write cycles of intermediate object files directly to memory.

Here is how you do it in a GitLab Runner config.toml:

[[runners]]
  name = "CoolVDS-Oslo-Runner-01"
  url = "https://gitlab.com/"
  token = "TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    # The magic line: the MySQL service container writes its data to RAM
    services_tmpfs = {"/var/lib/mysql" = "rw,noexec,nosuid,size=512m"}
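
Note that services_tmpfs only affects service containers (the MySQL sidecar above). If you want the build container's own scratch space in RAM as well, the Docker executor accepts an analogous tmpfs map; a minimal sketch, where the path and size are example values you would tune for your own jobs:

[runners.docker]
  [runners.docker.tmpfs]
    # Scratch space mounted inside the build container itself
    "/builds/tmp" = "rw,noexec,nosuid,size=1g"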

For a standard Jenkins setup on a Linux VPS, you can manually mount a build directory into RAM:

# Add to /etc/fstab
tmpfs   /var/lib/jenkins/workspace   tmpfs   rw,size=4g,uid=1000,gid=1000   0  0

# Mount immediately
mount /var/lib/jenkins/workspace

Warning: Ensure your VPS has enough RAM. A 4 GB tmpfs on a small 2 GB instance will push your builds straight into the OOM killer. We recommend at least 8 GB of RAM for this technique.
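
A quick sanity check before and after enabling the mount is cheap insurance:

# How much memory is actually free before you commit 4 GB to a ramdisk?
free -h

# After mounting, confirm the workspace really is tmpfs
df -hT /var/lib/jenkins/workspace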

The Hardware Factor: NVMe vs. SSD

In 2019, many providers still market SATA SSDs as "high speed." They aren't. The SATA interface bottlenecks at roughly 600 MB/s. NVMe (Non-Volatile Memory Express) talks directly to the PCIe bus, pushing speeds beyond 3000 MB/s.

When your CI pipeline is restoring a node_modules cache or pulling a heavy Docker image, that difference is the gap between a 2-minute build and a 10-minute build. CoolVDS utilizes pure NVMe storage arrays because we know that I/O wait time is dead money.

Feature          | SATA SSD VPS               | CoolVDS NVMe
Throughput       | ~550 MB/s                  | ~3200 MB/s
Latency          | ~0.2 ms                    | ~0.03 ms
Parallel writes  | Queue-depth limited (AHCI) | 64K queues (NVMe)
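
Don't take any provider's marketing numbers (including ours) at face value; measure on the box yourself. ioping, if it's available in your distro's repositories, gives per-request disk latency in seconds, and a direct-I/O dd run approximates the throughput row of the table:

# Per-request disk latency against the current directory's disk
sudo apt-get install -y ioping
ioping -c 10 .

# Rough sequential write throughput, bypassing the page cache
dd if=/dev/zero of=ddtest.bin bs=1M count=1024 oflag=direct
rm ddtest.bin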

Local Nuances: The Norwegian Context

If your development team is in Oslo or Bergen, hosting your CI runners in Frankfurt or London adds unnecessary latency. Every git fetch and every artifact push pays that extra round trip.

By hosting in Norway, you reduce round-trip time (RTT) to the NIX (Norwegian Internet Exchange). Furthermore, with Datatilsynet enforcing GDPR strictly, keeping your source code and build artifacts (which often contain hardcoded dev secrets or customer data dumps) within Norwegian borders simplifies your compliance posture significantly.
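
Quantifying the network side is just as simple. From your runner, check the round trip to whatever your pipeline talks to most, your Git remote and your artifact store. mtr (install it if needed) combines ping and traceroute into one report; gitlab.com below is only an example target, substitute your own endpoints:

# Round-trip time and per-hop loss, 50 probes, no DNS lookups
mtr --report --report-cycles 50 --no-dns gitlab.com

# Plain RTT to your Git remote
ping -c 10 gitlab.com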

Implementing a Caching Proxy

Stop downloading the same libraries from Maven Central or the npm registry a thousand times a day. Set up a local Nexus or Artifactory mirror on the same LAN as your runners.

A simple Nginx reverse proxy configuration can also act as a cache for specific static assets if you want a lightweight solution:

# Cache up to 10 GB of registry responses on local disk
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=my_cache:10m inactive=60m max_size=10g;

server {
    listen 80;

    location / {
        # Talk to the upstream registry over TLS (it redirects plain HTTP)
        proxy_pass https://registry.npmjs.org;
        proxy_ssl_server_name on;
        proxy_set_header Host registry.npmjs.org;
        proxy_cache my_cache;
        proxy_cache_valid 200 60m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    }
}
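
Once the proxy is up, point your runners at it instead of the public registry. For npm that's a single config line; proxy.internal.example is a placeholder for wherever you host the cache:

# Point every build on this runner at the local cache
npm config set registry http://proxy.internal.example/

# Or commit it per project
echo "registry=http://proxy.internal.example/" > .npmrc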

Conclusion

Optimizing CI/CD is an exercise in removing friction. You remove software friction with better Docker drivers and caching strategies. You remove hardware friction by abandoning SATA for NVMe. And you remove network friction by hosting close to your team.

Don't let a slow disk dictate your deployment schedule. Spin up a CoolVDS NVMe instance today, mount your workspace in RAM, and see what it feels like to wait seconds, not minutes.