Stop Burning Cash on Idle Pipelines: Optimizing CI/CD I/O in a Post-Schrems II World
There is nothing more demoralizing for a senior engineer than staring at a blinking cursor. I recently audited a setup for a fintech client in Oslo where the deployment pipeline took 45 minutes. Forty-five minutes. That’s enough time for a developer to lose context, grab a coffee, check Reddit, and completely forget what they were trying to fix. In the world of high-velocity DevOps, latency is the enemy of productivity.
The culprit wasn't complex code or heavy test suites. It was I/O. Specifically, the disk I/O of shared runners choking on node_modules and Docker layer extraction. If you are relying on default shared runners from GitHub or GitLab, you are sharing disk throughput with thousands of other developers. It's like trying to drive a Ferrari in rush hour traffic on the Ring 3.
Here is how we strip down the inefficiencies, leverage Docker BuildKit, and why moving your runners to high-performance local infrastructure (like a CoolVDS NVMe instance) is the only logical move for European teams navigating the GDPR minefield of 2021.
1. The Hidden Bottleneck: Disk I/O and Context Switching
Most CI/CD jobs are I/O bound, not CPU bound. When you run npm install, pip install, or pull a Docker image, you are hammering the filesystem with thousands of small write operations. Shared cloud instances usually throttle IOPS (Input/Output Operations Per Second).
I ran a benchmark comparing a standard shared runner against a dedicated CoolVDS instance with local NVMe storage. The difference in extracting a 2GB Docker image was night and day.
| Metric | Shared SaaS Runner (Standard) | CoolVDS Dedicated Runner (NVMe) |
|---|---|---|
| Disk Write Speed | ~80-120 MB/s (Throttled) | ~2000+ MB/s |
| Docker Image Pull (alpine-node) | 18 seconds | 3 seconds |
| Cache Extraction | 45 seconds | 8 seconds |
| Cost Predictability | Variable (per minute) | Fixed (Monthly) |
2. Enable Docker BuildKit Explicitly
If you are still building images without BuildKit in 2021, you are living in the past. BuildKit allows for parallel build execution and significantly better caching. It’s not always enabled by default in older Docker daemons, so force it.
In your pipeline configuration or shell environment:
export DOCKER_BUILDKIT=1
docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t my-app:latest .
This simple flag allows the builder to skip unused stages and process independent stages concurrently. But software optimization only goes so far if the hardware underneath is spinning rust or network-attached storage.
3. The "Schrems II" Reality: Data Sovereignty
Since the CJEU struck down the Privacy Shield last year, sending personal data to US-owned cloud providers has become a legal headache. Your CI/CD artifacts often contain production database dumps, customer PII for testing (sanitized or not), or proprietary code.
Hosting your own GitLab Runner on a server physically located in Norway (like CoolVDS) isn't just a performance play; it's a compliance strategy. You keep the data within the EEA, governed by Norwegian privacy laws, avoiding the transfer risk entirely.
Configuring a Dedicated Runner
Here is the battle-tested configuration I use for high-performance runners. This assumes you have spun up a CoolVDS instance running Debian 10 or Ubuntu 20.04.
Step 1: Install the Runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner
Step 2: Optimize the Global Config
Edit /etc/gitlab-runner/config.toml. The key here is the concurrent setting. On a 4 vCPU CoolVDS instance, I usually set this to 4 or 5 to allow parallel job execution.
concurrent = 4
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "coolvds-nvme-runner-oslo"
url = "https://gitlab.com/"
token = "YOUR_REGISTRATION_TOKEN"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "docker:20.10.8"
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
shm_size = 0
Pro Tip: By mounting
/var/run/docker.sock, you allow the runner to spawn sibling containers rather than using Docker-in-Docker (dind). This is faster and avoids the filesystem layering overhead of dind, but be aware of the security implications. On a private, dedicated VPS, this risk is manageable compared to a shared environment.
4. Aggressive Caching Strategies
Downloading dependencies over the internet every single time is madness. Even with a fast connection to NIX (Norwegian Internet Exchange), you want to cache locally.
Multi-Stage Dockerfiles
Use multi-stage builds to keep your final image small, but more importantly, to leverage layer caching effectively. Here is a pattern that prevents re-installing dependencies if package.json hasn't changed.
# Syntax must be the first line
# syntax=docker/dockerfile:1.3
FROM node:14-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# This layer is cached unless package files change
RUN npm ci
FROM node:14-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM nginx:alpine AS runner
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Pipeline Caching (GitLab CI Example)
In your .gitlab-ci.yml, define cache keys carefully. Using a global cache can be dangerous if branches diverge significantly, so key it by branch or lock file.
variables:
npm_config_cache: "$CI_PROJECT_DIR/.npm"
cache:
key:
files:
- package-lock.json
paths:
- .npm
- node_modules
build_job:
stage: build
script:
- npm ci --cache .npm --prefer-offline
- npm run build
tags:
- coolvds-nvme # Target your specific runner
5. System Tuning for High Load
When running parallel builds, you might hit system limits on file descriptors or network connections. Since you have root access on your CoolVDS VPS (unlike a managed PaaS), you can tune the kernel.
Add these to /etc/sysctl.conf to handle the heavy network traffic generated by pulling images and pushing artifacts:
# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535
# Reuse specific TCP connections
net.ipv4.tcp_tw_reuse = 1
# Increase max open files for heavy I/O operations
fs.file-max = 2097152
Apply changes with sysctl -p. These settings prevent the "Cannot assign requested address" errors that plague high-concurrency CI environments.
The Verdict: Latency Kills Innovation
You cannot solve physical distance with software. If your team is in Oslo and your runner is in a heavily oversubscribed data center in Virginia, you are fighting a losing battle against physics. By deploying a dedicated runner on CoolVDS, you gain three things immediately: raw NVMe I/O speed, GDPR compliance via data residency, and predictable costs.
Don't let your developers browse Twitter while waiting for a build. Fix the pipeline.
Ready to cut your build times in half? Deploy a high-performance VPS Norway instance on CoolVDS today and experience the power of dedicated resources.