Stop Bleeding Money on Slow Builds: Architecting High-Performance CI/CD Pipelines in 2019
There is nothing—absolutely nothing—more soul-crushing for a DevOps engineer than staring at a yellow "Pending" status for 15 minutes, only to have the build fail because of a timeout. If you are relying on shared runners from the big cloud providers, you are paying a hidden tax. It’s not just the monthly bill; it’s the context switching cost every time your developers lose focus waiting for npm install to finish.
I recently audited a deployment pipeline for a fintech client in Oslo. Their time-to-production was lagging significantly. The culprit wasn't their code; it was their infrastructure. They were compiling heavy Java artifacts on burstable cloud instances that throttled CPU the moment the load got real. The fix wasn't a complex refactor. It was moving the build agents to dedicated infrastructure with high I/O throughput.
The Bottleneck is Almost Always I/O
In 2019, we are obsessing over Kubernetes and microservices, but we often forget the iron underneath. CI/CD processes are notoriously I/O heavy. Unpacking Docker images, resolving dependencies, linking binaries—these operations thrash the disk.
When you use a standard VPS or a shared runner, you are often fighting for IOPS with the noisy neighbor next door who is running a crypto miner or a poorly optimized Magento cron job. To fix this, we need dedicated NVMe storage and a Linux kernel tuned for throughput.
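Before committing to any host, sanity-check what the disk actually delivers. fio is the proper tool for IOPS numbers, but a rough dd run works anywhere and exposes a throttled volume immediately (a quick sketch; the test file path is arbitrary):

```shell
# Rough sequential-write baseline. conv=fdatasync forces the data to disk
# so the page cache doesn't flatter the number. Writes a ~256 MB test file.
dd if=/dev/zero of=/tmp/io-test bs=1M count=256 conv=fdatasync
rm -f /tmp/io-test
```

On a contended HDD VPS this often reports double-digit MB/s; a local NVMe drive should be an order of magnitude faster. For real random-read/write IOPS figures, reach for fio with a 4k randwrite job.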
Pro Tip: Never use burstable CPU instances (like T-series) for CI runners. Compilation is a sustained 100% CPU activity. You will run out of credits halfway through the build, and your speed will drop to 10% of baseline. Always choose instances with dedicated cores, like the Performance tiers on CoolVDS.
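You can see this throttling directly in the "steal" counter: CPU time the hypervisor withheld from your VM. A quick check that works on any Linux box:

```shell
# The 9th field of the aggregate "cpu" line in /proc/stat is steal time
# (in jiffies since boot); a steadily climbing value during builds means
# the host is throttling you.
awk '/^cpu /{print "steal jiffies since boot:", $9}' /proc/stat
```

For a live view, vmstat 1 reports the same metric as a percentage in its st column. Anything consistently above a few percent during compilation is a red flag.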
Architecture: The Self-Hosted GitLab Runner
For this walkthrough, we are going to set up a high-velocity GitLab Runner on a CoolVDS instance located in Oslo. Why Oslo? Because if your team is in Norway, latency matters. Pushing a 2GB Docker image to a registry in Frankfurt takes significantly longer than pushing it to a local datacenter connected to NIX (Norwegian Internet Exchange).
1. The Infrastructure Setup
First, we provision a server. I recommend at least 4 vCPUs and 8GB RAM for a serious build agent. We are using Ubuntu 18.04 LTS.
Before installing the runner, we need to optimize the kernel for Docker network performance. Standard TCP settings are too conservative for the thousands of ephemeral connections a CI pipeline creates.
Open /etc/sysctl.conf (nano /etc/sysctl.conf) and append:

```ini
# Widen the range of ephemeral ports for short-lived connections
net.ipv4.ip_local_port_range = 1024 65000

# Reuse connections stuck in TIME_WAIT
net.ipv4.tcp_tw_reuse = 1

# Increase the accept backlog for high connection bursts
net.core.somaxconn = 4096

# Make iptables see bridged container traffic so Docker networking
# behaves predictably (requires the br_netfilter kernel module;
# load it first with: modprobe br_netfilter)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

Apply these changes with sysctl -p.
2. Installing the Runner
We will use the official GitLab repository. Don't use the default package manager versions; they are often outdated.
```bash
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-runner
```
3. The Critical Configuration: Docker Socket vs. DIND
Here is where 90% of setups fail on performance. You have two choices for building Docker images inside your CI:
- Docker-in-Docker (dind): Stronger isolation between jobs, but slow (no shared layer cache), fiddly about storage drivers, and it requires privileged mode, which carries its own security trade-offs.
- Docker Socket Binding: Mounts the host's Docker socket into the job container. Blazing-fast caching, but any pipeline code can control the host's Docker daemon, so it requires trust in your pipeline code.
For a private build farm where you control the code, Socket Binding is superior. It allows you to reuse the host's layer cache, meaning if you didn't change your package.json, the build step is virtually instant.
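To actually profit from that layer cache, order your Dockerfile so the expensive dependency install sits above the frequently-changing source copy. A minimal sketch for a Node app (paths and scripts are illustrative):

```dockerfile
FROM node:10-alpine
WORKDIR /app
# Lockfiles first: this layer stays cached until dependencies change
COPY package.json package-lock.json ./
RUN npm ci
# Source changes only invalidate the layers from here down
COPY . .
RUN npm run build
```

With socket binding, those cached layers live on the host, so they survive across pipelines and across projects on the same runner.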
Register the runner with this specific configuration:
```toml
[[runners]]
  name = "CoolVDS-NVMe-Runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.1"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
```
Note the volumes line. We are mapping the host's socket. This allows the container to spawn sibling containers on the host, leveraging the raw NVMe speed of the CoolVDS instance directly, rather than going through a virtualized filesystem overlay.
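For reference, a non-interactive registration command along these lines should produce roughly the config above (the token is a placeholder; flags as in gitlab-runner 12.x):

```bash
sudo gitlab-runner register --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --description "CoolVDS-NVMe-Runner-01" \
  --executor "docker" \
  --docker-image "docker:19.03.1" \
  --tag-list "coolvds-nvme" \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache"
```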
Caching Strategy: The Speed Multiplier
Even with NVMe, downloading dependencies over the internet is slow. We need a local cache. Since we are operating in a Norwegian context, complying with GDPR is easier if data stays local. We can set up a MinIO instance on the same private network as our runner to act as a localized S3-compatible cache.
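If you do stand up MinIO, point the runner's distributed cache at it in config.toml. A sketch with a hypothetical private-network endpoint and placeholder credentials:

```toml
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "10.0.0.5:9000"  # MinIO on the private network (example address)
    AccessKey = "CACHE_ACCESS_KEY"
    SecretKey = "CACHE_SECRET_KEY"
    BucketName = "runner-cache"
    Insecure = true                  # plain HTTP is fine inside a private network
```

With Shared = true, multiple runners on the same network can reuse each other's cache entries.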
However, for immediate gains, we can configure local volume caching in our .gitlab-ci.yml:
```yaml
stages:
  - build
  - test

variables:
  DOCKER_DRIVER: overlay2

build_app:
  stage: build
  image: node:10-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
  tags:
    - coolvds-nvme
```
Using npm ci instead of npm install is crucial in 2019 pipelines—it's deterministic and faster, but it relies heavily on I/O. On a standard HDD VPS, npm ci for a large React app can take 4 minutes. On a CoolVDS NVMe instance, I've clocked it at 45 seconds.
Maintenance: Don't Let Docker Eat Your Disk
A high-frequency CI server will fill up disk space with dangling images and volumes faster than you think. You don't want to wake up at 3 AM because the disk is full. Set up a cron job to prune the system nightly.
Create a script /opt/scripts/docker-prune.sh:
```bash
#!/bin/bash
# Remove stopped containers, unused networks, and all unused images
# older than 24 hours (recent layers stay cached for the next build)
docker system prune -af --filter "until=24h"
# Remove unused local volumes (volume prune does not support an age filter)
docker volume prune -f
```
Make it executable with chmod +x /opt/scripts/docker-prune.sh, then add it to crontab:

```
0 3 * * * /opt/scripts/docker-prune.sh >> /var/log/docker-prune.log 2>&1
```
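As a belt-and-braces step, a tiny guard script can warn you before the disk fills up between prune runs. A sketch, assuming GNU df; the threshold and the alerting mechanism (here, just stderr) are up to you:

```shell
#!/bin/bash
# Warn when the root filesystem crosses a usage threshold
THRESHOLD=85
USAGE=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "Disk usage at ${USAGE}% -- run docker system prune -af now" >&2
fi
```

Wire its output into whatever alerting you already have (mail, Slack webhook) rather than waiting for the 3 AM page.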
Data Sovereignty and Compliance
For Norwegian businesses, the location of your CI artifacts matters. Source code often contains trade secrets or, inadvertently, PII (Personally Identifiable Information) in test data. Under GDPR, you are the controller of this data.
Hosting your build runners on US-controlled public clouds introduces complexity regarding data transfer mechanisms. By utilizing a provider like CoolVDS with datacenters physically located in the Nordics, you simplify your compliance posture. The data goes from your git repo to a server in Oslo, is built, and deployed to your production environment (likely also in Oslo). It never crosses the Atlantic.
Comparison: Shared Cloud vs. Dedicated VPS
| Feature | Public Cloud Shared Runner | Dedicated CoolVDS Runner |
|---|---|---|
| Startup Time | Variable (1-5 mins) | Instant (< 2 seconds) |
| I/O Performance | Throttled / Networked Storage | Local NVMe Pass-through |
| Cost | Free tier (slow) or per-minute | Flat monthly rate |
| Data Location | Opaque (Usually EU-West) | Norway (Oslo) |
Final Thoughts
Optimization is not about magic; it is about removing friction. In a CI/CD pipeline, friction is I/O latency and resource contention. By moving your build agents off shared infrastructure and onto dedicated, high-performance NVMe instances, you aren't just saving minutes on a build. You are keeping your developers in the "flow state."
If you are tired of watching the spinner and want to see what a raw Linux kernel on proper hardware can do for your deployment times, it's time to upgrade your infrastructure.
Ready to cut your build times in half? Spin up a high-performance instance on CoolVDS today and build without limits.