
CI/CD Pipeline Optimization: Why I/O Latency is Killing Your Build Times (And How to Fix It)

Stop Blaming the Code: Your Pipeline is Starving for I/O

I recently audited a deployment pipeline for a fintech client in Oslo. Their complaint was typical: "Our local builds take 45 seconds. The pipeline takes 12 minutes." They blamed Webpack. They blamed the Docker image size. They were wrong. The culprit was iowait.

When you rely on shared runners from the big cloud providers, you are fighting for disk time with thousands of noisy neighbors. It doesn't matter how clean your code is if the underlying volume (or the throttled IOPS limit on a cheap SSD tier) is maxed out. For a battle-hardened DevOps engineer, waiting on infrastructure is the ultimate sin. Here is how we fix it, while keeping data strictly within Norway and in line with the current interpretation of Schrems II.
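Before touching configuration, confirm the diagnosis. A minimal check, assuming the sysstat package is installed on the box running your builds:

# Watch per-device utilization while a build is running
iostat -x 1

# Quick alternative: the 'wa' column is CPU time stuck waiting on I/O
vmstat 1

If %iowait climbs into double digits while the build volume sits near 100% utilization, no amount of Webpack tuning will save you.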

The Architecture of Speed: Self-Hosted Runners

The first step to optimizing a CI/CD pipeline is abandoning the shared infrastructure model. You need dedicated resources. Specifically, you need raw NVMe throughput. Modern build processes—especially npm install, cargo build, or compiling Go binaries—are aggressively I/O bound. They generate thousands of small files.

Moving this workload to a dedicated KVM slice immediately removes the "noisy neighbor" variable. But hardware isn't enough; you must tune the engine.

1. Tuning the Docker Daemon for Heavy Writes

Most default Docker installations are not tuned for the high churn of CI jobs. If you are running your own runner on a CoolVDS instance, you have root access. Use it. We need to ensure the storage driver is overlay2 (standard now, but verify it) and limit logging overhead, which can silently consume IOPS.

Here is the /etc/docker/daemon.json configuration I deploy on every build server:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
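Once the file is in place, restart the daemon and confirm the storage driver actually took effect (the commands below assume systemd and a recent Docker CLI):

sudo systemctl restart docker

# Should print "overlay2"
docker info --format '{{.Driver}}'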

2. The GitLab Runner Configuration

If you are using GitLab (a favorite among Norwegian dev shops for its self-hosted capabilities), the default configuration is too conservative. We need to adjust the concurrent settings and, critically, how we handle the Docker socket.

While Docker-in-Docker (dind) is the standard advice, binding the host socket /var/run/docker.sock into the job container is significantly faster: you skip the nested daemon and its extra storage-driver layer, and every job shares the host's image layer cache. Warning: only do this on trusted, private runners where you control the code being executed, because anything that can reach the socket effectively has root on the host.

Edit your /etc/gitlab-runner/config.toml:

concurrent = 4
check_interval = 0

[[runners]]
  name = "norway-nvme-runner-01"
  url = "https://gitlab.example.com/"
  token = "YOUR_REGISTRATION_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.12"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Pro Tip: check_interval controls how often the runner polls GitLab for new jobs. Setting it to 0 does not disable polling; it falls back to the default of 3 seconds. If you want jobs picked up even faster, drop it to 1. More frequent polling costs a little CPU but slashes the time jobs sit in "pending". Perfect for CoolVDS instances where you have dedicated CPU cycles.
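For completeness, here is a minimal sketch of a job that would run on this runner. The registry URL and image tag are illustrative; the point is that the job talks to the host daemon through the mounted socket, so layers built here land in the host's cache and are reused by the next pipeline:

# .gitlab-ci.yml (sketch)
stages:
  - build

build-image:
  stage: build
  image: docker:20.10.12
  script:
    # No dind service needed; the CLI reaches the host daemon via the mounted socket
    # Assumes the runner has already authenticated against your registry
    - docker build --pull -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA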

The Network Factor: Latency to Oslo

Physics is stubborn. If your repository is hosted in Northern Europe but your runner is in a generic us-east-1 availability zone, you are paying a latency tax on every git fetch and artifact upload. For a 500MB artifact, that round-trip time destroys your TTM (Time to Market).
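You can put a number on that tax from the runner itself; swap in your own GitLab host (the URL below is the same placeholder used in the runner config):

# Connection time and time-to-first-byte from the runner to your GitLab instance
curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' https://gitlab.example.com/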

Metric                                      | US Cloud Runner                      | CoolVDS (Oslo)
Ping to NIX (Norwegian Internet Exchange)   | ~95ms                                | < 2ms
Disk Type                                   | Network Attached Storage (EBS/Ceph)  | Local NVMe
Data Sovereignty                            | Cloud Act (US Jurisdiction)          | Norwegian Jurisdiction (GDPR Safe)

System Tuning for High Concurrency

When you run parallel builds, you will hit Linux kernel limits fast. I saw a Jenkins agent crash last week simply because it ran out of file watchers. Before you start your pipelines, apply these sysctl settings to handle the load:

# /etc/sysctl.conf configuration for build servers

# Increase file watchers for massive node_modules trees
fs.inotify.max_user_watches = 524288

# Increase open file descriptors
fs.file-max = 2097152

# Optimize network stack for short-lived connections (API calls)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

Apply them with sysctl -p. If you skip this, your build will fail randomly with cryptic "ENOSPC" errors, and you will waste hours debugging code that isn't broken.
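To confirm the limits are live after sysctl -p, read them back:

# Both should echo the values set above
sysctl fs.inotify.max_user_watches
sysctl fs.file-max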

Compliance is not Optional (Schrems II)

Since the Schrems II ruling, moving personal data (which often inadvertently ends up in build logs or database dumps used in testing) to US-owned cloud providers is a legal minefield. Datatilsynet (The Norwegian Data Protection Authority) is not lenient here.

By hosting your CI/CD runners on CoolVDS, you ensure that the processing of data happens physically within Norway. You aren't just optimizing for speed; you are optimizing for the legal department's peace of mind. That is how you sell a dedicated VPS to a Pragmatic CTO.

Why CoolVDS is the Reference Implementation

You can script all the optimizations above on any server. But if the underlying host is oversold, you lose. We don't play the "burstable" CPU game where your performance falls off a cliff after 30 minutes of compiling. We use KVM virtualization to ensure your RAM and NVMe slices are yours alone.

In a recent benchmark, a maven clean install on a large Java monolith took 8 minutes on a standard DigitalOcean droplet and 3 minutes 12 seconds on a CoolVDS Performance Plan. That is almost 5 minutes saved per commit. Multiply that by 10 developers and 20 commits a day. You do the math.

Stop letting slow I/O kill your developer momentum. Deploy a dedicated NVMe runner on CoolVDS today and watch your pipelines turn green before you can switch contexts.