
CI/CD Pipelines Are Dying From I/O Starvation: A DevOps Post-Mortem

I recently watched a senior backend engineer stare at a Jenkins console for 42 minutes. He wasn't compiling the Linux kernel. He was waiting for a standard React frontend build and a few PHP unit tests to finish. The pipeline didn't fail; it just crawled.

If you've spent any time in the trenches of DevOps, you know this pain. We obsess over code efficiency, writing O(n log n) algorithms, yet we deploy on infrastructure that treats I/O like a scarce resource from 1999. In Norway, where developer salaries are among the highest in Europe, paying a developer to watch a spinning progress bar is financial suicide.

Let's cut the fluff. The problem isn't your webpack config (usually). The problem is that your CI runners are gasping for air on cheap, noisy-neighbor storage. Here is how we fix it, using strategies that work with the infrastructure available in late 2023.

The Hidden Killer: I/O Wait

When you run npm install, composer install, or pull Docker images, you are hammering the disk. On a standard VPS shared with 500 other users, your iowait spikes. The CPU sits idle, waiting for the disk controller to return data.

Pro Tip: Run iostat -x 1 on your runner during a build. If %iowait exceeds 5-10%, your storage solution is the bottleneck, not your code.
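If iostat (from the sysstat package) isn't installed on your runner, you can get a rough %iowait reading straight from /proc/stat. This is a minimal sketch, assuming a Linux host; field 6 of the "cpu" line counts iowait ticks:

```shell
#!/bin/sh
# Rough %iowait estimate without sysstat: sample /proc/stat twice.
# The "cpu" line is: cpu user nice system idle iowait irq softirq ...
read -r _ user nice system idle iowait rest < /proc/stat
sleep 1
read -r _ user2 nice2 system2 idle2 iowait2 rest2 < /proc/stat

total1=$((user + nice + system + idle + iowait))
total2=$((user2 + nice2 + system2 + idle2 + iowait2))
dt=$((total2 - total1))
dw=$((iowait2 - iowait))

# Integer percentage of ticks spent waiting on I/O over the sample window
echo "iowait over last second: $((100 * dw / dt))%"
```

Run this while a build is in flight; a sustained double-digit figure points at storage, not CPU.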

I recall a project for a client in Stavanger. Their builds took 25 minutes on a generic cloud provider. We moved the self-hosted GitLab Runners to a CoolVDS instance with dedicated NVMe storage. Without changing a single line of code, the build time dropped to 8 minutes. Why? Because NVMe handles the random read/write patterns of package managers orders of magnitude faster than SATA SSDs or network-attached block storage.

Optimizing the Docker Layer Cache

Before we talk more about hardware, we must ensure your software isn't fighting against you. Docker BuildKit (standard in 2023) is powerful, but only if you respect layer order.

Common mistake: copying source code before installing dependencies.

The "Slow" Way

FROM node:18-alpine
WORKDIR /app
COPY . .
# Every time you change a single file in your code, this layer invalidates
RUN npm install
RUN npm run build

The Optimized Way

By copying only the manifest files first, Docker caches the heavy npm install layer unless dependencies actually change.

FROM node:18-alpine
WORKDIR /app

# Copy only dependency definitions first
COPY package.json package-lock.json ./

# This layer is cached even if you change src/index.js
RUN npm ci --quiet

COPY . .
RUN npm run build
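BuildKit can go one step further with cache mounts, which persist the package manager's download cache across builds even when the dependency layer itself is invalidated. This is a sketch, assuming BuildKit is enabled (it is the default builder in Docker 23+); the mount target is npm's default cache directory for root:

```
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app

COPY package.json package-lock.json ./

# Cache mount: /root/.npm survives between builds, so even a lockfile
# change re-downloads only the packages that actually changed
RUN --mount=type=cache,target=/root/.npm npm ci --quiet

COPY . .
RUN npm run build
```

On a runner with fast local NVMe, the cache mount turns a cold npm ci into a mostly warm one.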

Configuring High-Performance Runners

If you are using GitLab CI, the default runner configuration is often too conservative. You need to adjust the concurrency limits and ensure you are using the Docker executor with the overlay2 storage driver.

Here is a production-ready config.toml snippet used on a CoolVDS KVM instance. Note the concurrent limit and the volume mapping for Docker socket binding (DooD, Docker outside of Docker), which is often faster than DinD (Docker in Docker) for simple builds, though security implications apply.

concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    # Mount the host docker socket for performance
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

On a KVM-based VPS like CoolVDS, you have full kernel control. This allows you to tune the vm.dirty_ratio to handle the bursty nature of CI writes better than on container-based VPS solutions (OpenVZ/LXC), where kernel parameters are locked.
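A sketch of what that tuning might look like. The percentages below are illustrative starting points, not universal values; benchmark against your own build workload before committing them:

```
# /etc/sysctl.d/99-ci-writeback.conf  (illustrative values)
vm.dirty_background_ratio = 10   # start background writeback at 10% of RAM dirty
vm.dirty_ratio = 40              # block writers only once 40% of RAM is dirty

# Apply without a reboot:
#   sysctl --system
```

A higher dirty_ratio lets bursty CI writes land in the page cache and flush asynchronously instead of stalling the build process mid-write.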

The Geography of Latency

Latency is the silent killer of deployment pipelines. If your development team is in Oslo and your servers are in Frankfurt or Virginia, you are paying a tax on every git push and every artifact upload.

Action                   Server in US-East    Server in Oslo (CoolVDS)     Impact
Ping RTT                 ~95 ms               ~2-5 ms                      SSH terminal lag
500 MB artifact upload   ~45 seconds          ~6 seconds                   Slower deploy pipelines
Data sovereignty         Patriot Act risk     GDPR/Schrems II compliant    Legal compliance

For Norwegian businesses, adhering to Datatilsynet guidelines is not optional. Keeping your CI artifacts—which often contain intellectual property and potentially PII in test databases—within Norwegian borders is a massive compliance advantage. CoolVDS infrastructure sits directly on the local backbone, meaning your data doesn't needlessly cross borders.

Scripting the Deploy

Don't rely on heavy plugins if you don't have to. A simple, robust rsync script is often the most reliable deployment method for static content or PHP applications. It is bandwidth-efficient and atomic per file (rsync writes each file to a temporary name, then renames it), though not atomic for the deployment as a whole.

#!/bin/bash
set -euo pipefail

# Variables
REMOTE_USER="deploy"
REMOTE_HOST="203.0.113.50" # Your CoolVDS IP (placeholder)
REMOTE_DIR="/var/www/production"

echo "Deploying to $REMOTE_HOST..."

# Build
npm run build

# Sync
# -a: archive mode (recursion, permissions, timestamps)
# -v: verbose
# -z: compress during transfer
# --delete: remove files on the remote that no longer exist locally
rsync -avz --delete ./dist/ "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"

# Post-deploy hooks
ssh "$REMOTE_USER@$REMOTE_HOST" "cd $REMOTE_DIR && php artisan migrate --force"

echo "Deployment complete."
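If you need the cutover itself to be atomic, a symlink-swap release layout is the classic fix: rsync into a fresh release directory, then repoint a "current" symlink in a single rename. This is a minimal local sketch; the paths (releases/, current) and the demo build directory are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: symlink-swap releases. The final switch is one rename(2),
# so readers never observe a half-synced tree. Demo runs in a temp dir.
set -e
cd "$(mktemp -d)"

mkdir -p dist && echo "v1" > dist/index.html   # stand-in for your build output

TS=$(date +%Y%m%d%H%M%S)
RELEASE="releases/$TS"
mkdir -p "$RELEASE"
cp -R dist/. "$RELEASE/"

# Build the new symlink aside, then rename it over "current" atomically
# (GNU mv; -T treats the destination as a plain entry, not a directory)
ln -sfn "$RELEASE" current.tmp
mv -Tf current.tmp current

cat current/index.html
```

In production you would point your web server's document root at current/ and keep the last few release directories around for instant rollback.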

Why Infrastructure Matters in 2023

We are seeing a shift. The initial excitement of "serverless everything" is settling into a pragmatic understanding that raw compute and fast disk still rule the world of build pipelines. Serverless functions have cold starts; dedicated KVM instances do not.

When you control the metal (or the virtual metal), you control the outcome. CoolVDS provides that control with high-frequency CPUs and local NVMe storage that eats I/O heavy tasks for breakfast. If you are tired of timeouts and sluggish pipelines, stop blaming the code. Look at the iron it's running on.

Stop letting slow I/O kill your developer velocity. Spin up a high-performance runner on CoolVDS today and watch your build times plummet.