Why Your CI/CD Pipeline Is Slow (And How to Fix It With NVMe & KVM)
I have watched brilliant developers waste hours of their lives staring at a spinning icon next to a "Pending" build status. It is the modern equivalent of compiling Gentoo on a Pentium 4. We accept it as normal. It is not.
In 2019, if your deployment pipeline takes longer than 10 minutes, you are not just wasting time; you are actively killing your team's momentum. I recently audited a setup for a fintech client in Oslo. Their Jenkins build took 45 minutes. The culprit? Not the code. Not the tests. It was the underlying infrastructure choking on I/O operations.
Most VPS providers oversell their storage throughput. When you run `npm install` or `docker build`, you are hammering the disk with thousands of tiny read/write operations. On a standard HDD or a throttled SSD, the I/O queue depth explodes. Your CPU sits idle, waiting for the disk to catch up. This is "I/O Wait," and it is the silent killer of productivity.
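You can watch this happen live with nothing but standard tools. If the `wa` column in `vmstat` climbs while a build runs, your CPU is stalled on storage, not computation:

# CPU stats every second; a high "wa" (I/O wait) column means the disk is the bottleneck
vmstat 1
# Processes stuck in "D" state are blocked on I/O right now
ps -eo state,pid,cmd | grep "^D"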
The Architecture of Speed: Self-Hosted Runners
Public shared runners (like those provided by GitLab.com or Travis) are convenient, but they are black boxes. You share kernel resources with noisy neighbors. If someone else is mining crypto or compiling Rust on the same physical host, your pipeline suffers.
The solution is a dedicated, self-hosted runner. But not just any server. You need:
- True KVM Virtualization: You need a dedicated kernel for Docker-in-Docker (DinD) stability. OpenVZ containers often fail here due to cgroup limitations. (Not sure what you are running on? See the check after this list.)
- NVMe Storage: SATA SSDs are no longer enough for heavy concurrent builds.
- Proximity: If your team is in Norway, why route your artifacts through a data center in Virginia?
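A one-line sanity check, assuming a systemd-based distro:

# Prints the virtualization technology of the current machine.
# "kvm" is what you want; "openvz" or "lxc" means you share a kernel with your neighbors.
systemd-detect-virt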
Pro Tip: At CoolVDS, we don't oversell resources. Our KVM instances map directly to high-performance hardware. For a CI runner, I recommend starting with our 4GB RAM NVMe plan to keep the entire Docker layer cache in memory.
Step 1: The OS Layer (Ubuntu 18.04 LTS)
Let's assume you have spun up a fresh CoolVDS instance running Ubuntu 18.04. First, we need to fix the Linux kernel defaults. They are too conservative for the heavy file watching required by modern JavaScript frameworks.
Check your current file watch limit:
cat /proc/sys/fs/inotify/max_user_watches
It's likely 8192. That is pathetic. If you have a large React app, webpack's file watchers will blow past that limit and die with "ENOSPC" errors that look random. Increase it permanently:
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
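Verify that the new value stuck:

sysctl fs.inotify.max_user_watches
# fs.inotify.max_user_watches = 524288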
Step 2: Optimizing the Docker Daemon
Docker is the engine of modern CI. In 2019, the overlay2 storage driver is the gold standard; its predecessor, the original overlay driver, was notorious for inode exhaustion. You still need to configure the daemon sensibly. Create or edit /etc/docker/daemon.json:
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  },
  "dns": ["1.1.1.1", "8.8.8.8"]
}
Restart Docker to apply:
sudo systemctl restart docker
This config does three things: it stops your CI logs from eating all your disk space (a classic rookie mistake), it raises the file descriptor limits so your build scripts don't choke on open sockets, and it pins fast public DNS resolvers so image pulls and package downloads don't stall on a slow upstream resolver.
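After the restart, confirm Docker actually picked the settings up:

# Should print: overlay2
docker info --format '{{.Driver}}'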
Step 3: Configuring the GitLab Runner
Install the runner. Do not use the package from the default `apt` repositories; it is often outdated. Use the official script to add GitLab's repository, then install:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-runner
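Then register it against GitLab. The registration token below is a placeholder; grab the real one from your project or group under Settings > CI/CD > Runners. The flags mirror the config.toml we are about to edit:

sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image "docker:19.03.1" \
  --description "CoolVDS-Oslo-NVMe-01"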
Once registered, the magic happens in /etc/gitlab-runner/config.toml. This is where you tell the runner how to behave. We want to use the Docker socket binding method for caching, which is vastly faster than uploading/downloading cache archives for every stage.
[[runners]]
  name = "CoolVDS-Oslo-NVMe-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.1"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
By mapping /var/run/docker.sock, the build container talks directly to the host's Docker engine instead of a nested one. Pull an image once and it is cached on the host; every subsequent build reuses it instantly. It also means you do not need a separate docker:dind service in your pipeline.
Note: Only do this on trusted, private runners. Any job that can reach that socket effectively has root on the host.
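You can exploit this by pre-pulling your common base images on the runner host itself, so even the first job of the day starts warm. The image list below is just an example; adjust it to whatever your stack actually uses:

# Run these on the runner host, not inside a job
docker pull node:10-alpine
docker pull docker:19.03.1
docker pull alpine:3.10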
Step 4: The Pipeline Configuration
Now, let's look at a .gitlab-ci.yml that leverages this speed. One gotcha: `npm ci` deletes node_modules before every install, so caching node_modules itself buys you nothing. Instead, we cache npm's download directory so we stop re-downloading the internet every time.
stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

build_app:
  image: node:10-alpine
  stage: build
  script:
    # Point npm's cache inside the project so the runner can cache it between jobs
    - npm ci --cache .npm --prefer-offline
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

docker_build:
  stage: build
  image: docker:19.03.1
  # No docker:dind service needed: the runner mounts the host's Docker socket,
  # so this job talks straight to the host engine and its layer cache
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Pull the previous image so --cache-from has layers to reuse
    # (|| true keeps the very first build from failing)
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Data Sovereignty and Latency
Why does location matter? If your team is based in Oslo or Bergen, pushing a 2GB Docker image to a server in Frankfurt costs real time on every single pipeline; latency adds up. By hosting your runner on a VPS in Norway, you are utilizing the local peering at NIX (Norwegian Internet Exchange), and the round-trip time (RTT) drops from roughly 35ms to around 2ms.
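Measure it yourself from your office connection. The hostname below is a placeholder for your own runner's address:

# Average RTT over 10 packets
ping -c 10 runner.example.com
# Or see where your packets actually go, hop by hop
mtr --report --report-cycles 10 runner.example.com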
Furthermore, we are seeing increasing scrutiny from Datatilsynet regarding where data is processed. While GDPR allows EU transfers, keeping your intellectual property and source code on servers physically located in Norway simplifies your compliance posture significantly. You don't have to worry about the reach of the US CLOUD Act if your data never crosses the Atlantic.
Performance Check
Don't take my word for it. Install `sysstat` and watch your I/O during a build:
sudo apt install sysstat && iostat -mx 1
On a standard cloud VPS, you will see `%util` hit 100% and `await` spike to 50ms+. On CoolVDS NVMe instances, even during a heavy `webpack` compilation, your wait times should stay near zero. That is the difference between a 15-minute build and a 3-minute build.
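To benchmark the disk directly instead of inferring from a build, run a quick fio test. The 4K random-write pattern below approximates what npm install does to a disk; treat the parameters as a reasonable starting point, not gospel:

sudo apt install fio
# 4K random writes, 1GB test file, direct I/O to bypass the page cache
fio --name=ci-disk-test --rw=randwrite --bs=4k --size=1G \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based --end_fsync=1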
Final Thoughts
You can optimize your Dockerfiles all day, but you cannot code your way out of bad hardware. If your infrastructure is slow, your team is slow.
Stop accepting "I/O Wait" as a lifestyle. Deploy a dedicated GitLab runner on a CoolVDS NVMe instance today. It takes 55 seconds to provision, and it might just save you hours of waiting every week.