Stop Waiting for Builds: Optimizing CI/CD Pipelines with Local NVMe Runners
There is nothing more soul-crushing than pushing a hotfix and staring at a spinning circle for 20 minutes, only for the build to fail because of a timeout. If your CI/CD pipeline takes longer than the time it takes to brew a fresh cup of coffee, you have a problem. In the Nordic market, where developer time is expensive, inefficient pipelines aren't just annoying—they are a financial leak.
Most teams default to SaaS runners provided by GitHub or GitLab. It’s convenient. It’s also slow. You are sharing CPU cycles and, more critically, disk I/O with thousands of other developers. When your npm install or cargo build hits the disk, it crawls. I’ve seen pipelines for Magento stores and heavy Java microservices drop from 15 minutes to under 3 minutes simply by moving from shared SaaS runners to dedicated, self-hosted NVMe infrastructure.
The I/O Bottleneck: Why Your Runner Matters
CI/CD is fundamentally an I/O-bound operation. You are pulling images, extracting layers, compiling binaries, writing artifacts, and pushing images back to a registry. Shared cloud runners often throttle IOPS. If you want speed, you need raw disk throughput.
This is where the infrastructure choice becomes architectural, not just operational. We run our heavy build agents on CoolVDS instances because the underlying NVMe storage provides the random Read/Write speeds necessary to handle massive node_modules folders without choking. It’s simple physics: higher IOPS equals faster dependency resolution.
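If you want to verify what your current runner actually delivers, a quick random-I/O benchmark with fio answers it in a minute. A sketch (the 4k random read/write profile roughly approximates dependency extraction; the file size and runtime are arbitrary choices):

# Random 4k read/write benchmark, roughly what npm/yarn extraction looks like
fio --name=ci-disk-test --ioengine=libaio --rw=randrw --bs=4k \
    --size=1G --numjobs=4 --runtime=60 --time_based --direct=1 \
    --group_reporting

Run the same command on a shared SaaS runner and on a dedicated NVMe instance and compare the reported IOPS; the gap usually explains your pipeline times on its own.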
Configuration Strategy: Docker BuildKit with Cache Mounts
If you are still writing plain Dockerfiles in 2025, you are doing it wrong. Docker BuildKit has been stable for years, yet I still audit setups that don't use cache mounts. Cache mounts stop Docker from re-downloading dependencies when the lockfile hasn't changed, even if you don't have a persistent layer cache.
Here is how you structure a Dockerfile for a Node.js application to leverage aggressive caching:
# syntax=docker/dockerfile:1.4
FROM node:20-alpine AS deps
WORKDIR /app
# Bind-mount the manifests and keep the yarn cache between builds.
# YARN_CACHE_FOLDER must point at the cache mount, or yarn ignores it.
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=yarn.lock,target=yarn.lock \
    --mount=type=cache,target=/root/.yarn \
    YARN_CACHE_FOLDER=/root/.yarn yarn install --frozen-lockfile
COPY . .
RUN yarn build
This simple change prevents the package manager from fetching the internet every time you change a line of code. However, cache mounts require the runner to have persistent local storage or a very fast connection to a cache backend. A transient container on a slow network renders this useless.
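For completeness, here is a sketch of the build invocation (the registry URL and tags are placeholders). BuildKit is the default builder since Docker 23 and can be forced on older engines. Note that RUN cache mounts live only on the builder's local disk, which is exactly where fast persistent NVMe pays off; the optional registry cache below only covers layer caching:

# BuildKit is the default since Docker 23; force it on older engines
DOCKER_BUILDKIT=1 docker build -t registry.example.com/app:latest .

# Optional: share the layer cache between runners via the registry
docker buildx build \
    --cache-from type=registry,ref=registry.example.com/app:buildcache \
    --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
    -t registry.example.com/app:latest .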
Latency and Data Sovereignty: The Norwegian Context
If your target infrastructure is in Oslo, why is your CI runner in Virginia? Deploying from a US-based runner to a server in Norway introduces unnecessary latency and potential packet loss during the transfer of large artifacts (like 2GB+ Docker images).
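It is worth quantifying this before arguing about it. A quick sketch from the runner's shell (hostnames and paths are placeholders):

# Round-trip time from the runner to the deploy target
ping -c 5 deploy-target.example.no

# Rough throughput check: time the transfer of a large artifact
time scp artifacts/image-bundle.tar.gz deploy@deploy-target.example.no:/tmp/

A transatlantic hop typically adds somewhere around 80-100 ms of round-trip time, and that penalty compounds across every TCP handshake and registry push in the pipeline.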
Furthermore, we must talk about compliance. In the post-Schrems II era, Datatilsynet (The Norwegian Data Protection Authority) does not look kindly on personal data leaving the EEA. If your build artifacts contain production database dumps for testing, or if your debug logs accidentally capture PII, processing that on a US-controlled cloud runner is a GDPR risk.
Pro Tip: Keep your pipeline local. Hosting your GitLab Runner or GitHub Actions Runner on a VPS in Norway ensures that your code and artifacts never leave the jurisdiction. It also means you are pushing to your production servers over a local link, often leveraging peering at NIX (Norwegian Internet Exchange).
Implementing a High-Performance Self-Hosted Runner
Let's set up a GitLab Runner on a CoolVDS instance running Debian 12 (Bookworm). This setup assumes you prioritize speed over isolation: either the shell executor for raw performance, or the Docker executor with a mapped socket for flexibility. We'll use the Docker executor here; the install and registration commands follow the host tuning below.
1. Optimize the Host System
Before installing the runner, tune the Linux kernel for high-throughput network and file operations. Add this to /etc/sysctl.conf:
# Increase system file descriptor limits
fs.file-max = 2097152
# Optimize TCP stack for low latency
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mtu_probing = 1
# Increase queue length for high packet rates
net.core.netdev_max_backlog = 16384
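Apply the settings with sysctl, then install and register the runner. A sketch for Debian 12 using GitLab's packaged repository (replace YOUR_TOKEN with the token from your project's CI/CD settings; older runner releases take --registration-token instead of --token):

# Load the new kernel settings without a reboot
sudo sysctl -p

# Add GitLab's official package repository and install the runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner

# Register against gitlab.com with the Docker executor
sudo gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.com/" \
    --token "YOUR_TOKEN" \
    --executor "docker" \
    --docker-image "docker:24.0.5"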
2. Configure the Runner for Concurrency
Edit your /etc/gitlab-runner/config.toml. The keys that matter here are the concurrent setting and the volume mapping. Mounting the host's Docker socket lets jobs spawn sibling containers on the host's daemon (often loosely called Docker-in-Docker, though no nested daemon is involved), which means every job shares the host's layer cache.
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    # privileged is only needed for a nested dind daemon; with the
    # host socket mounted below, sibling containers work without it
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Warning: Mapping docker.sock implies security trade-offs. Ensure your CI environment is isolated from your production workloads. We usually dedicate a specific VDS for this purpose.
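The runner watches config.toml and reloads it on change, but after larger edits I prefer an explicit restart followed by a connectivity check:

# Restart the service and confirm the runner can still reach GitLab
sudo gitlab-runner restart
sudo gitlab-runner verify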
The Economic Argument: TCO of Self-Hosted
Many CTOs argue that SaaS is cheaper because there is no maintenance. This is false economy. Calculate the cost of five senior developers each waiting an extra 15 minutes per merge request: at 10 merges a day, that is 2.5 hours a day, or 12.5 developer-hours lost over a five-day week.
| Feature | SaaS Runner (Free/Standard) | CoolVDS Self-Hosted (NVMe) |
|---|---|---|
| vCPU / RAM | 2 vCPU / 7GB (Shared) | 4+ vCPU / 8GB+ (Dedicated) |
| Storage I/O | Network Attached (Variable) | Local NVMe (Consistent High Speed) |
| Caching | Upload/Download required | Local persistence (Instant) |
| Location | Random (US/EU) | Oslo, Norway |
Automated Cleanup Strategy
The downside of persistent, self-hosted runners is disk usage. Docker images accumulate fast. You need a cron job to keep your CoolVDS instance healthy. Do not rely on manual pruning.
#!/bin/bash
# /usr/local/bin/cleanup-docker.sh
set -euo pipefail
# Remove stopped containers older than 24 hours
docker container prune -f --filter "until=24h"
# Remove dangling images
docker image prune -f
# Remove build cache older than 48 hours to prevent disk fill
docker builder prune -f --filter "until=48h"
# Check root filesystem usage and alert via Slack if above 80%
USAGE=$(df -P / | awk 'NR==2 { print $5 }' | tr -d '%')
if [ "$USAGE" -gt 80 ]; then
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"⚠️ Build Runner Disk Usage High: ${USAGE}%\"}" \
        "YOUR_SLACK_WEBHOOK_URL"
fi
Add this to root's crontab to run daily at 04:00.
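A sketch of the crontab entry (the log path is just a suggestion):

# Run the cleanup daily at 04:00 and keep a log of what was pruned
0 4 * * * /usr/local/bin/cleanup-docker.sh >> /var/log/cleanup-docker.log 2>&1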
Conclusion
Speed is a feature. If your CI/CD pipeline is slow, you are shipping fewer features and fixing fewer bugs. By moving from constrained SaaS runners to high-performance local infrastructure, you gain control over the hardware, the network path, and the security of your data.
For Norwegian teams, the choice is clear. You need low latency, GDPR compliance, and raw NVMe performance. Don't let slow I/O kill your momentum. Spin up a high-performance instance on CoolVDS today and watch your build times drop.