Fixing the 20-Minute Build Queue: Self-Hosted CI/CD in Post-Schrems II Norway
It’s 15:45 on a Friday. You push a hotfix. The tests should take three minutes. Instead, you're staring at a pending status because the shared runner pool on your SaaS git provider is choked. We have all been there. Waiting on a queue creates context switching, and context switching kills code quality. If your pipeline takes longer than a coffee break, it is broken.
But performance isn't the only headache in 2021. Since the CJEU's Schrems II ruling last year, pushing code containing customer data or production secrets through US-owned infrastructure is a compliance minefield. If you are operating in Norway, Datatilsynet is watching. The solution is not more expensive SaaS tiers. It’s bringing the compute home.
The I/O Bottleneck in Modern Builds
Most CI/CD pipelines are I/O bound, not CPU bound. Think about it: npm install, docker build, and artifact uploads all hammer the disk. Shared runners often run on standard SSDs (or worse, network-attached storage with capped IOPS). This is why a build that takes 2 minutes on your local machine can take 10 in the cloud.
To fix this, we need raw NVMe throughput and isolation. This is where a dedicated KVM instance outperforms a containerized shared runner.
Pro Tip: When benchmarking VPS providers for CI workloads, ignore the CPU core count for a moment. Look at disk write speeds. If dd reports under 400 MB/s, your node_modules extraction will crawl.
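A quick way to check on a fresh instance; oflag=direct bypasses the page cache, so you measure the disk rather than RAM:
dd if=/dev/zero of=/tmp/iotest bs=1M count=1024 oflag=direct status=progress
rm /tmp/iotest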
Step 1: The Infrastructure
We use CoolVDS for this reference architecture because they provide local NVMe storage directly attached to the KVM hypervisor. No network latency on disk writes. Plus, the datacenter is in Oslo, peering directly at NIX (Norwegian Internet Exchange). If your dev team is in Scandinavia, the latency reduction is palpable.
Provision a server with at least 4 vCPUs and 8GB RAM. Ubuntu 20.04 LTS is the standard here.
First, tune the kernel for the high churn of short-lived network connections that heavy build processes generate (registry pulls, artifact uploads):
sysctl -w net.ipv4.tcp_tw_reuse=1
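That setting does not survive a reboot. Persist it via sysctl's drop-in directory (the 99-ci-tuning.conf filename is just a convention used here):
echo "net.ipv4.tcp_tw_reuse = 1" | sudo tee /etc/sysctl.d/99-ci-tuning.conf
sudo sysctl --system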
Step 2: Leveraging BuildKit and Layer Caching
Stop sending the full build context every time. Ensure you are using Docker BuildKit. It has been stable for a while now and drastically improves concurrency.
Set this env var in your CI variables:
export DOCKER_BUILDKIT=1
Here is a proper multi-stage Dockerfile that leverages caching layers effectively. If you aren't doing this, you are wasting bandwidth.
# syntax=docker/dockerfile:1.3
# Stage 1: install dependencies; this layer only rebuilds when the lockfile changes
FROM node:14-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Mount a cache volume to speed up npm ci
RUN --mount=type=cache,target=/root/.npm \
    npm ci --prefer-offline
# Stage 2: compile the app using the cached node_modules from the deps stage
FROM node:14-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Stage 3: ship only the static output in a slim runtime image
FROM nginx:1.21-alpine AS runner
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The --mount=type=cache directive is a game changer. It persists the npm cache between builds on the same runner.
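With BuildKit enabled, a plain docker build picks the cache mount up automatically. A quick smoke test on the runner itself (the image name is illustrative): run it twice, and the second pass should skip npm ci entirely.
DOCKER_BUILDKIT=1 docker build -t myapp:test .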
Step 3: Deploying a Self-Hosted GitLab Runner
Let's assume you are using GitLab. The logic applies to GitHub Actions runners or Jenkins agents too. We want a runner that executes Docker commands inside the VPS but keeps the cache local.
First, add the GitLab package repository and install the runner:
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner
Now, register the runner. We use the docker executor. This ensures a clean environment for every job, but since the Docker daemon is on our NVMe-backed CoolVDS instance, image pulls are instant if the base layers exist locally.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "docker:20.10.8" \
  --description "coolvds-nvme-runner-oslo" \
  --tag-list "nvme,fast,norway" \
  --run-untagged="true" \
  --locked="false" \
  --access-level="not_protected"
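Sanity-check the registration before touching the config:
sudo gitlab-runner verify
sudo gitlab-runner list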
Once registered, we need to tweak /etc/gitlab-runner/config.toml to bind the host's Docker socket into job containers. The alternative, full Docker-in-Docker (DinD), requires privileged mode and starts every job with an empty image cache. Socket binding has its critics (jobs gain access to the host daemon), but it avoids privileged mode entirely, which makes it the safer bet for 2021.
[[runners]]
  name = "coolvds-nvme-runner-oslo"
  url = "https://gitlab.com/"
  token = "TOKEN_HASH"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.8"
    # privileged is only needed for true DinD; socket binding works without it
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Notice the volume mount. Binding /var/run/docker.sock lets each job spawn sibling containers on the host's daemon instead of nesting a second daemon inside the job. The result? You don't pull the node:14-alpine image 50 times a day. It is there. Ready.
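To route jobs at this runner, reference its tags from your pipeline. A minimal .gitlab-ci.yml sketch, assuming the registration above; the job name is illustrative, and the tags line is what does the routing:
build:
  stage: build
  image: docker:20.10.8
  tags:
    - nvme
  script:
    - export DOCKER_BUILDKIT=1
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
Because the socket is bound, this docker build runs against the host daemon and reuses every layer previous jobs left behind.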
The Clean-Up Script
Self-hosted runners have one downside: disk usage. Since we are caching everything to maximize speed, your storage will fill up. A shared runner vanishes after use; a VPS persists. You need a cron job to keep things tidy.
Create a script /opt/scripts/docker-cleanup.sh:
#!/bin/bash
# Prune stopped containers, dangling images, and unused networks older than 24h
# (no -a flag here: -a would also evict the base images we deliberately keep cached)
docker system prune -f --filter "until=24h"
# Specific cleanup for build cache if space is critical
# docker builder prune -f --filter "until=48h"
echo "Docker cleanup completed at $(date)" >> /var/log/docker-cleanup.log
Add it to root's crontab (the script needs access to the Docker daemon) to run daily at 04:00 Norway time:
0 4 * * * /bin/bash /opt/scripts/docker-cleanup.sh
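Between cleanup runs, it is worth checking what the cache actually costs you:
docker system df
df -h /var/lib/docker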
Comparison: Shared vs. CoolVDS Dedicated
| Feature | SaaS Shared Runner | CoolVDS Self-Hosted |
|---|---|---|
| Build Start Time | Variable (Queue dependent) | Instant |
| I/O Speed | Capped / Networked | Local NVMe |
| Data Sovereignty | Often US-controlled | Strictly Norway/Europe |
| Cost | Per minute (expensive at scale) | Flat monthly fee |
The Compliance Edge
For Norwegian entities, the "Data Processor" agreement is vital. By hosting your runner on a Norwegian VPS, you ensure that the temporary artifacts—which often contain database dumps for integration testing or proprietary source code—never physically leave the region. In a post-Schrems II world, this isn't just a technical preference; it's often a legal requirement for public sector or fintech projects.
Control the metal, control the data.
Latency matters too. If your git repository is hosted on a managed instance in Europe, and your runner is in Oslo, the clone times are negligible. We have measured round-trip times (RTT) from CoolVDS to major European hubs at under 15ms.
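You can reproduce that measurement from your own instance; substitute whichever host your repositories actually live on:
ping -c 10 gitlab.com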
Stop Waiting
A fast CI/CD pipeline is the single best investment for developer happiness. You don't need a complex Kubernetes cluster for this. A single, robust VPS with high-speed storage can handle the load of a 20-person team easily.
Don't let slow I/O kill your release cadence. Deploy a high-performance runner on CoolVDS today and watch those green checkmarks fly.