Stop Watching Progress Bars: Optimizing CI/CD Pipelines on Norwegian Infrastructure
There is a persistent myth in our industry that long build times are an inevitable tax on software development. You push code, the pipeline spins up, and you go grab a coffee. By the time you return, the context switch has already happened. You’ve lost your flow.
If your CI/CD pipeline takes longer than five minutes, you are bleeding money. Not just in server costs, but in developer cognition. In 2019, with the tools we have available—Docker 19.03, GitLab CI, and mature KVM virtualization—there is absolutely no excuse for sluggish integration cycles.
I’ve spent the last month auditing a pipeline for a fintech client in Oslo. Their builds were taking 28 minutes. The culprit wasn't their code complexity; it was their infrastructure. They were running builds on oversold, magnetic-storage VPS instances hosted in Frankfurt, throttled by noisy neighbors and network hops. We moved them to a dedicated NVMe-backed KVM instance in Oslo. The build time dropped to 7 minutes. Here is the technical breakdown of why that happened and how you can replicate it.
The I/O Bottleneck: Why Your Disk Speed Matters More Than CPU
Most developers underestimate the sheer amount of I/O required during a standard npm install, composer install, or a Docker build process. Thousands of small files are written, read, and discarded. On a standard HDD or a cheap SATA SSD shared among 50 users, your IOPS (Input/Output Operations Per Second) hit a ceiling instantly.
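You can verify this on any Node.js project after a fresh install; the file count is usually startling:

```bash
# Count the files a single dependency install just wrote to disk;
# tens of thousands is typical for a mid-sized project
find node_modules -type f | wc -l
```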
When you are running a private GitLab Runner, the single most critical hardware metric is random read/write performance. This is where NVMe storage becomes non-negotiable. If you are hosting your runners on CoolVDS, you are already sitting on NVMe arrays. If you aren't, you are likely waiting on disk latency, not CPU processing.
Pro Tip: Check your I/O wait times. Run `iostat -x 1` during a build. If your `%iowait` exceeds 5-10%, your storage is the bottleneck, not your code.
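If iostat is missing, it ships with the sysstat package on Debian-based systems (the package name may differ on other distributions):

```bash
# Install sysstat, then sample extended device stats every second for 30 seconds;
# watch %iowait in the CPU summary line and %util per device
sudo apt-get install -y sysstat
iostat -x 1 30
```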
Configuring the Docker Daemon for Performance
Out of the box, Docker is decent, but for a high-churn CI environment, it needs tuning. One major issue we see is the storage driver. By now, most distributions default to overlay2, but if you are migrating legacy systems, ensure you aren't stuck on devicemapper.
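Before changing anything, confirm what the daemon is actually using:

```bash
# Print the active storage driver; anything other than overlay2 deserves scrutiny
docker info --format 'Storage driver: {{.Driver}}'
```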
Furthermore, CI pipelines generate massive amounts of log data that can choke disk I/O. Configure your /etc/docker/daemon.json to limit this:
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
Restart the daemon with systemctl restart docker; a plain reload won't pick up storage or logging changes, and the new log options only apply to containers created afterwards. This ensures that a runaway build process doesn't exhaust the host's file descriptors or choke the disk with gigabytes of logs.
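To confirm the log options took effect, inspect a freshly created container (the container name here is arbitrary):

```bash
# Start a throwaway container and check its effective log configuration
docker run -d --rm --name logtest alpine sleep 30
docker inspect --format '{{json .HostConfig.LogConfig}}' logtest
```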
The Network Latency Factor: Why Geography Matters
In Norway, we have the benefit of robust infrastructure, but physics still applies. If your Git repository is hosted on a server in the US and your build runner is in Oslo, every git fetch and Docker layer pull crosses the Atlantic, adding roughly 100 ms of round-trip latency to every request. A busy pipeline makes a lot of those requests.
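You can put a number on that hop from the runner host (gitlab.com here is just an illustrative endpoint; substitute your actual Git remote):

```bash
# Round-trip latency to the remote
ping -c 5 gitlab.com

# TLS handshake and total time for a single HTTPS request
curl -o /dev/null -s -w 'handshake: %{time_appconnect}s  total: %{time_total}s\n' https://gitlab.com/
```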
For Norwegian dev teams, the strategy should be data gravity. Keep your code, your runners, and your artifacts close. Hosting your runner on a VPS in Norway (like CoolVDS) ensures minimal latency to local staging environments and compliance with strict data residency requirements enforced by Datatilsynet.
Optimizing GitLab Runner Cache
A common mistake is pulling dependencies from scratch on every pipeline execution. You must utilize caching effectively. Here is a battle-tested .gitlab-ci.yml snippet for a Node.js project that caches npm's download directory, keyed on the lock file:
cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/

build_job:
  stage: build
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run build
Using npm ci instead of npm install is crucial in CI environments: it's faster and strictly follows the lock file. Note that npm ci deletes any existing node_modules before installing, which is why the snippet caches npm's download directory (.npm) rather than node_modules itself.
The Architecture: Private Runners vs. Shared Cloud
Why bother managing your own runner on a VPS? Predictability and Security.
| Feature | Shared Cloud Runners | Private Runner (CoolVDS) |
|---|---|---|
| Performance | Variable (Noisy Neighbors) | Consistent, Dedicated Resources |
| Data Privacy | Often opaque/US-based | Full Control (GDPR/Norway) |
| Cost | Pay per minute | Flat monthly rate |
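Registering a private runner against your GitLab instance is a one-time command (a sketch; the description is arbitrary and the token is a placeholder):

```bash
# Non-interactive registration of a Docker-executor runner
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor "docker" \
  --docker-image "docker:19.03.1" \
  --description "coolvds-worker-oslo-01"
```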
Scaling the Runner
When configuring your /etc/gitlab-runner/config.toml, don't leave the concurrency settings at the default global limit of 1. A 4-vCPU CoolVDS instance can comfortably run four parallel jobs, provided each job's memory footprint fits in the available RAM.
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-worker-oslo-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.1"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Note the privileged = true. This is often necessary for Docker-in-Docker (dind) workflows, which allow you to build container images inside your CI jobs. However, this has security implications. If you trust your developers and own the infrastructure (which you do on a VPS), this is acceptable. If you are in a multi-tenant environment, you should use socket binding instead.
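Socket binding means mounting the host's Docker socket into job containers instead of running a nested daemon. A minimal sketch of the relevant config.toml change:

```toml
# Jobs talk to the host's Docker engine through the mounted socket,
# so no privileged nested daemon is needed
[runners.docker]
  tls_verify = false
  image = "docker:19.03.1"
  privileged = false
  volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```

Keep in mind that jobs sharing the host socket can see and manipulate each other's containers, so this trades isolation for simplicity.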
Keeping It Clean
Self-hosted runners have one downside: disk accumulation. Docker images pile up. If you don't automate cleanup, your shiny NVMe drive will hit 100% usage at 3 AM on a Saturday. Schedule a cron job to run this simple script weekly:
#!/bin/bash
# Remove stopped containers, unused networks, and ALL unused images
# older than 7 days (-a goes beyond dangling images)
docker system prune -af --filter "until=168h"
This command removes unused data older than 7 days (168 hours), keeping your pipeline agile without deleting cache layers that are actively being used.
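Wiring that into cron takes one line (the script path and schedule here are assumptions; adjust to taste):

```bash
# /etc/cron.d/docker-prune: weekly cleanup, Sundays at 04:00
0 4 * * 0 root /usr/local/bin/docker-prune.sh >> /var/log/docker-prune.log 2>&1
```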
The Verdict
Building software is hard enough without fighting your infrastructure. By shifting your CI/CD workloads to a high-performance VPS in Norway, you gain three things: speed from NVMe storage, compliance from local data residency, and the sanity of predictable build times.
Don't let your deployment queue become the bottleneck of your innovation. Spin up a CoolVDS instance, install a runner, and watch your build times plummet.