Slash Your CI/CD Build Times: Why Shared Runners Are Killing Your Velocity
Waiting for a pipeline to finish isn't just a coffee break; it's a context-switching killer. If your developers are staring at a spinning wheel for 15 minutes every time they push a commit, you are burning money. I've seen engineering teams in Oslo lose roughly 20% of their productive week simply waiting for npm install to finish on oversubscribed, choked-out shared runners.
The solution isn't "more microservices." It's brutal hardware efficiency and smarter architecture.
In this guide, we aren't talking about high-level theory. We are going to look at how to architect a self-hosted CI infrastructure that leverages raw NVMe throughput, adheres to strict Norwegian data sovereignty requirements (post-Schrems II), and cuts build times in half.
The Bottleneck is Almost Always I/O
Most DevOps engineers obsess over CPU cores. They throw a 32-core instance at a build server and wonder why it's still slow. Here is the uncomfortable truth: CI/CD is an I/O nightmare.
Think about what happens during a standard build:
- Checking out git repositories (thousands of small files).
- Restoring caches (extracting massive archives).
- Resolving dependencies (tens of thousands of writes for node_modules or vendor).
- Building/Linking (heavy read/write).
- Pushing Docker images.
On a standard cloud instance with network-attached storage (boot volumes), your IOPS are capped. You hit the ceiling immediately. In a recent migration for a fintech client in Bergen, moving their Jenkins architecture from a general-purpose cloud tier to a CoolVDS instance with local NVMe storage reduced their Java build time from 14 minutes to 4 minutes. No code changes. Just physics.
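Before you blame the code, verify the ceiling. Here is a rough fio sketch (the target directory and job parameters are assumptions; point it at the disk your runner actually builds on) that reproduces the small-random-write pattern dependency resolution generates:
# 4k random writes: the access pattern of npm/composer dependency installs.
# The --directory path below is an example; use your runner's build/workspace disk.
fio --name=ci-io-check --directory=/var/lib/docker --rw=randwrite \
    --bs=4k --size=1G --numjobs=4 --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
If a network-attached boot volume reports a few thousand IOPS here while local NVMe reports an order of magnitude more, you have found your 14-minute build.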
Configuring the Perfect GitLab Runner
Let's assume you are using GitLab CI (the standard for many European dev shops in 2021). To get maximum performance, you need a custom runner configuration.
1. The Executor: Docker vs. Shell
While shell executors are faster (no container overhead), they are messy: jobs run directly on the host and leak state between builds. We want isolation. The docker executor is the standard, but it needs tuning. Specifically, you must ensure you are using the overlay2 storage driver, or you will suffer massive latency penalties.
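Before touching the runner config, take ten seconds to confirm what the host is actually using (this assumes Docker is already installed on the runner host):
# Print the storage driver the Docker daemon is currently using.
docker info --format '{{.Driver}}'
# Expected output: overlay2. Anything else (vfs, devicemapper) needs fixing first.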
Here is a production-ready config.toml tuned for a high-performance environment:
[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  # Cap concurrent jobs so builds don't fight over the same NVMe queue.
  limit = 4
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    [runners.cache.s3]
      ServerAddress = "minio.internal:9000"
      AccessKey = "minioadmin"
      SecretKey = "minioadmin"
      BucketName = "runner-cache"
      Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.8"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
    # Reuse base images already on disk instead of pulling on every job.
    pull_policy = "if-not-present"
Critical Note: Notice the pull_policy = "if-not-present". On a persistent VPS like CoolVDS, this saves massive bandwidth and time by reusing base images already present on the disk.
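To squeeze the most out of that policy, pre-seed the host with the base images your pipelines reference most often. The tags below are just examples; substitute whatever your jobs actually use:
# Pull common base images once; with pull_policy = "if-not-present",
# later jobs reuse these local copies instead of hitting the registry.
docker pull docker:20.10.8
docker pull node:16-alpine
docker pull maven:3.8-openjdk-11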
2. Optimizing Docker Daemon for I/O
The default Docker configuration is "safe," not fast. On your CoolVDS host, tune the daemon configuration (/etc/docker/daemon.json) to parallelize layer downloads and uploads. This helps saturate the 1Gbps uplink that is standard on our Norwegian nodes.
{
  "storage-driver": "overlay2",
  "max-concurrent-uploads": 5,
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Pro Tip: Don't let Docker logs eat your disk space. I've seen servers crash because a runaway container logged 50GB of text. The log-opts above prevent this hard crash scenario.
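Finally, daemon.json is only read at startup, so validate it and restart the daemon. A rough sequence (the format string at the end is just one way to spot-check the result):
# Catch JSON syntax errors before they stop the daemon from starting at all.
python3 -m json.tool /etc/docker/daemon.json
# Apply the new settings. This stops running containers unless live-restore is enabled.
sudo systemctl restart docker
# Confirm the daemon picked up the storage and logging settings.
docker info --format 'storage={{.Driver}} logging={{.LoggingDriver}}'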