CI/CD Pipeline Optimization: Why I/O Latency is Killing Your Build Times (And How to Fix It)

Stop Waiting for Builds: The I/O Bottleneck You Ignored

If your deployment pipeline takes long enough for you to walk to the break room, brew a fresh pot of coffee, and return before the unit tests pass, you have a problem. In the fast-paced development cycles of 2019, time is the only currency that matters. I have seen too many engineering teams in Oslo throw massive CPU resources at a slow build server, only to see a marginal 5% improvement. Why? Because they are optimizing the wrong metric.

The bottleneck usually isn't raw compute power; it is Disk I/O. Whether you are running npm install, compiling C++ artifacts, or building Docker images, your pipeline is thrashing the disk. If you are running your CI/CD runners on standard SATA SSDs—or worse, spinning rust—in a noisy public cloud, your expensive developers are being paid to wait.

The Anatomy of a Slow Build

Let’s look at a recent scenario I debugged for a client in the FinTech sector. They were running a Jenkins cluster on a generic VPS provider in Frankfurt. Their build times for a monolithic Java application had crept up to 45 minutes. They doubled the vCPUs. The build time dropped to 42 minutes. Not exactly a victory.

We ran iostat during the build process. The results were damning.
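
The extended device statistics below come from iostat's -x flag; the interval and sample count here are illustrative.

# Extended device stats, 5-second interval, 3 samples, captured while the build was running
iostat -x 5 3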

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          15.40    0.00    3.20   78.50    0.00    2.90

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00    12.00   85.00   45.00  12400.0  4500.0   260.00     4.50   35.00   10.00   80.00   7.50  98.50

Look at that %iowait. It's sitting at 78.5%. The CPU is idling, waiting for the disk to fetch data. The drive utilization (%util) is pegged at 98.5%. This is a classic "noisy neighbor" problem combined with slow storage throughput. In shared hosting environments without strict isolation, another tenant's database backup can grind your CI pipeline to a halt.
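
When a device is pegged like this, it also helps to see which processes on the runner are generating the traffic; per-process I/O accounting usually makes the guilty build step obvious (pidstat ships in the same sysstat package as iostat):

# Per-process disk throughput every 5 seconds; whichever javac/npm/docker process
# shows the highest kB_wr/s is the step worth caching or moving to faster storage
pidstat -d 5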

The Solution: NVMe and KVM Isolation

We migrated the workload to a CoolVDS instance powered by local NVMe storage. The difference between SATA SSD (roughly 550 MB/s) and NVMe (3,000+ MB/s) is not just incremental; it changes how you architect pipelines. And because CoolVDS uses KVM virtualization with dedicated resource allocations, we also eliminated the CPU contention that plagues oversold container-based VPS platforms (OpenVZ or LXC), the same contention KVM guests on an oversubscribed host see as "steal time".
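
Before trusting any migration, spend ten seconds confirming that the new instance really exposes local NVMe rather than network-attached block storage; device names vary, so treat this as illustrative:

# List whole disks only: ROTA=0 means non-rotational, and an nvme* device name
# (plus TRAN=nvme on recent util-linux) points to local NVMe rather than SATA
lsblk -d -o NAME,ROTA,TRAN,MODEL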

Here is how we configured the new GitLab Runner to leverage this speed. Note the concurrency settings and the Docker executor, which leans heavily on disk I/O for image layering.

Optimizing GitLab Runner Configuration

Inside /etc/gitlab-runner/config.toml, we tweaked the concurrency to match the high I/O throughput capabilities of the NVMe drive:

concurrent = 8
check_interval = 0

[[runners]]
  name = "CoolVDS-NVMe-Runner-Oslo"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.1"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]

Pro Tip: Mapping /var/run/docker.sock binds your builds to the host's Docker daemon, so jobs reuse the host's image layer cache instead of starting cold; true Docker-in-Docker (DinD) keeps each build isolated but pays for it with an empty cache every run. Either way, understand the security trade-off of exposing the Docker socket or running privileged containers. On a private CoolVDS instance that risk is manageable, but never do this on shared runners.
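
If you provision runners from scratch, registration can be scripted so a fresh NVMe instance joins the pool in one step. A rough sketch that mirrors the config.toml above (the token is a placeholder):

# Non-interactive registration producing essentially the [[runners]] block shown above
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_TOKEN" \
  --description "CoolVDS-NVMe-Runner-Oslo" \
  --executor "docker" \
  --docker-image "docker:19.03.1" \
  --docker-privileged \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache"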

Benchmarking the Difference

Don't take my word for it. You should benchmark your current infrastructure. If you are serving customers in Norway, latency to the build server matters less than latency to your production environment, but raw disk speed is universal. Here is the fio command I use to test random write performance—the metric that matters most for compiling code and installing dependencies (lots of small files).

fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting

On a standard cloud VPS, you might see 3,000 IOPS. On CoolVDS NVMe instances, we consistently hit numbers significantly higher, often saturating the interface capabilities. This translates directly to faster npm install times.
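
Synthetic IOPS numbers are only a proxy, though. The sanity check I trust is timing the real dependency install on both the old and new runner (this assumes a project with a checked-in package-lock.json):

# npm ci always starts from an empty node_modules, so the timing reflects lock-file
# resolution plus tens of thousands of small-file writes, the same pattern fio measures above
time npm ci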

Data Sovereignty and GDPR

Since we are operating in 2019, we cannot ignore the legal landscape. Datatilsynet (the Norwegian Data Protection Authority) is rigorous about GDPR enforcement. If your CI/CD artifacts contain production database dumps for testing (a bad practice, but common), that data must reside within compliant jurisdictions.

Using a US-based cloud provider for your runners introduces complexity around the Privacy Shield framework. Hosting your build infrastructure on CoolVDS servers physically located in Norway keeps temporary build artifacts inside the EEA, so they never cross borders they shouldn't. Plus, peering at NIX (Norwegian Internet Exchange) means that when you push that final Docker image to your production servers in Oslo, the transfer happens over the local backbone instead of being routed through London or Stockholm.
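
You do not have to take the routing claim on faith either; trace the path from the runner to your registry or production host and look at where the hops land (the hostname here is a placeholder):

# Ten-cycle summary of every hop between the build runner and the target;
# with local peering the path should stay short and inside the Nordics
mtr --report --report-cycles 10 registry.example.no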

Docker Layer Caching Strategy

Hardware removes the I/O wait, but software configuration determines how efficiently you use it. In 2019, multi-stage builds are the standard. Stop shipping your build tools to production.

# STAGE 1: Build
FROM node:10-alpine AS builder
WORKDIR /app
COPY package*.json ./
# This step is I/O intensive - NVMe makes this fly
RUN npm ci
COPY . .
RUN npm run build

# STAGE 2: Production
FROM nginx:1.17-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

By using npm ci instead of npm install, the dependency tree comes straight from package-lock.json, which gives us a deterministic build. By leveraging the NVMe storage on CoolVDS, the extraction of the node_modules folder (often tens of thousands of small files) is nearly instantaneous.
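
On ephemeral runners the local layer cache often starts empty, so it pays to seed it from the registry. This is a sketch of the usual --cache-from pattern adapted to the multi-stage Dockerfile above; the registry path and tags are placeholders:

# Warm the cache from previously pushed images; '|| true' keeps a first-ever build from failing
docker pull registry.example.com/app:builder || true
docker pull registry.example.com/app:latest  || true

# Build and tag the builder stage on its own so its layers (including the npm ci layer) stay reusable
docker build --target builder \
  --cache-from registry.example.com/app:builder \
  -t registry.example.com/app:builder .

# Build the final image, reusing both cache sources
docker build \
  --cache-from registry.example.com/app:builder \
  --cache-from registry.example.com/app:latest \
  -t registry.example.com/app:latest .

# Push both tags so the next (possibly brand-new) runner can warm its cache the same way
docker push registry.example.com/app:builder
docker push registry.example.com/app:latest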

The Verdict

You can optimize your Dockerfiles and tweak your Webpack configs all day, but if the underlying storage subsystem cannot keep up with the IOPS demand of a modern CI pipeline, you are hitting a hard ceiling. Stability and raw throughput are not optional features; they are requirements.

For DevOps teams that need predictable, low-latency performance in the Nordic region, the answer is dedicated resources rather than oversold cloud platforms. CoolVDS offers the NVMe performance profile required to turn a 45-minute build nightmare into a 5-minute coffee break.

Ready to cut your build times in half? Deploy a high-performance NVMe runner on CoolVDS today and stop waiting on "iowait".