Stop Treating Your Build Server Like a Second-Class Citizen
Nothing destroys a development team's velocity faster than a 20-minute build pipeline. I've seen it a dozen times: a senior engineer pushes a hotfix, stares at a spinning circle long enough to lose context, gets distracted by Slack, and suddenly that "quick fix" has consumed two hours.
In 2022, if your CI/CD pipeline takes longer than five minutes for a standard microservice, you are burning money. But the problem usually isn't code complexity. It's infrastructure.
Most teams throw their runners on the cheapest available VPS instances, assuming CPU cycles are all that matter. They are wrong. In a recent audit for a fintech client in Oslo, we found that Disk I/O and network latency were responsible for 70% of the build duration. Here is how we fixed it, and how you can architect a pipeline that respects both your developers' time and Norwegian data sovereignty.
1. The I/O Bottleneck: Why NVMe is Non-Negotiable
Modern build processes are aggressively I/O heavy. npm install, pip install, and Docker image layering involve reading and writing thousands of tiny files. Traditional SATA SSDs (and worse, the network-attached block storage commonly used by hyperscalers) often hit an IOPS ceiling long before the CPU maxes out.
If you are running Jenkins or GitLab Runners on shared storage, your neighbors are stealing your IOPS. You need dedicated NVMe throughput.
Here is a quick way to benchmark if your current build server is choking on disk operations. Run this fio command on your runner:
fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1
If your IOPS are under 15,000, your node_modules installation is waiting on the disk, not the internet.
Pro Tip: On CoolVDS KVM instances, we expose the NVMe interface directly to the kernel. We typically see 4k random write speeds exceeding 50,000 IOPS. This difference alone can cut an npm ci step from 4 minutes to 45 seconds.
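Fast disks only help if Docker actually uses them. To relocate the daemon's layer storage onto the NVMe volume, point data-root at the fast mount. A minimal sketch, assuming your NVMe is mounted at /mnt/nvme (the path is a placeholder):

# /etc/docker/daemon.json -- move layer and build storage onto the NVMe mount
{
  "data-root": "/mnt/nvme/docker"
}

# Apply the change (existing images must be re-pulled or migrated manually)
sudo systemctl restart docker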
2. Docker BuildKit and Layer Caching
If you are still building Docker images the old way in 2022, you are doing it wrong. The legacy builder invalidates every subsequent layer as soon as an earlier one changes, forcing everything below it to be rebuilt and re-downloaded. You must enable Docker BuildKit.
BuildKit allows for concurrent build steps and much smarter caching through dedicated cache mounts. This is critical for compiled languages like Go or Rust, where re-downloading dependencies is a massive waste of bandwidth.
Enable it globally in your environment:
export DOCKER_BUILDKIT=1
Then, update your Dockerfile to use cache mounts. This tells Docker to persist the package manager's cache directory between builds, even if the layer is rebuilt.
# syntax=docker/dockerfile:1
FROM node:16-alpine
WORKDIR /app
# Mount the cache to speed up npm install
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
By using --mount=type=cache, the runner doesn't need to re-fetch packages from the registry every single time package.json changes; it only fetches the diff. This reduces network chatter significantly.
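The same cache-mount pattern pays off for compiled languages. Here is a minimal sketch for a Go service (module layout and binary name are illustrative):

# syntax=docker/dockerfile:1
FROM golang:1.18-alpine
WORKDIR /src
COPY go.mod go.sum ./
# Persist the module cache between builds
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
# Reuse both the module cache and the compiler's build cache
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/server .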
3. Data Sovereignty and Latency: The Oslo Factor
Latency is physics. If your repository is hosted in GitHub (likely US or EU-West) but your production servers are in Norway to serve local users, where should your CI runners live?
For Norwegian businesses, the answer is strictly local. With the Schrems II ruling and the aggressive stance of Datatilsynet (The Norwegian Data Protection Authority), moving production data or sensitive test artifacts outside the EEA (or even just to US-owned cloud providers) is a compliance minefield.
Hosting your CI/CD runners on a Norwegian VPS provider like CoolVDS solves two problems:
- Compliance: Data never leaves Norwegian jurisdiction during the build/test phase.
- Deploy Speed: Pushing a 2GB Docker image from a runner in Frankfurt to a cluster in Oslo takes time. Pushing it from a CoolVDS runner in Oslo to a server in the same datacenter is effectively instant (see the quick check below).
Configuring GitLab Runner for Local Execution
When setting up a GitLab Runner on a dedicated VPS, avoid the "shell" executor if you want clean environments. Use the "docker" executor. However, to ensure you don't run into "Docker-in-Docker" (DinD) storage-driver issues (often caused by the overlay2 driver on generic kernels), you need a KVM-based VPS.
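Registration writes a base config.toml that you can then tune by hand. A sketch of a non-interactive registration (the token and description are placeholders):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor "docker" \
  --docker-image "docker:20.10.16" \
  --docker-privileged \
  --description "CoolVDS-Oslo-NVMe-01"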
Here is a robust config.toml optimization for a high-performance runner:
concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "CoolVDS-Oslo-NVMe-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    [runners.cache.s3]
      ServerAddress = "minio.local:9000"  # Local caching server for speed
      AccessKey = "minio"
      SecretKey = "minio123"
      BucketName = "runner-cache"
      Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
Note the privileged = true flag. This is required for building Docker images inside Docker. On container-based VPS technologies (like OpenVZ or LXC), this can fail or compromise the host. This is why we insist on KVM virtualization at CoolVDS: it provides the kernel-level isolation necessary to run nested container stacks securely.
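For reference, here is a minimal .gitlab-ci.yml job that pairs with this runner configuration (the job name and image tag are illustrative):

build-image:
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_BUILDKIT: "1"
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .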
4. CPU Stealing: The Silent Killer
In a shared hosting environment, "vCPU" is a marketing term, not a technical guarantee. If a neighbor spins up a crypto miner, your build times fluctuate wildly. This inconsistency makes it impossible to debug whether a slowdown is caused by code or infrastructure.
You need Dedicated CPU threads or a provider that enforces strict fair-share scheduling. When we monitor CPU Steal Time (%st in top) on CoolVDS instances, it sits at 0.0%. If you see anything above 5% on your current runner, move immediately. You are paying for compute you aren't getting.
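Checking for steal time takes seconds on any Linux runner:

# Sample CPU stats five times at one-second intervals;
# the last column ("st") is the share of cycles stolen by the hypervisor
vmstat 1 5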
Summary: The Low-Latency Architecture
To fix your pipeline, stop optimizing your webpack config and start optimizing your infrastructure:
- Storage: Move from HDD/SATA to local NVMe.
- Builder: Switch to Docker BuildKit with cache mounting.
- Location: Place runners geographically close to your deployment target (Oslo).
- Compliance: Keep artifacts within Norwegian borders to satisfy GDPR/Schrems II.
Your DevOps team shouldn't be fighting the infrastructure. Give them raw power and watch the commit-to-deploy time drop.
Need a build environment that respects your time? Spin up a high-performance KVM instance on CoolVDS today. We offer pure NVMe storage and low-latency connectivity to all major Norwegian ISPs.