
Optimizing CI/CD Pipelines: Reducing Build Latency on Norwegian Infrastructure

The Hidden Cost of Waiting: Why Your Pipeline Drags

There is nothing more soul-crushing for a developer than pushing a commit and staring at a spinning yellow circle for 25 minutes. If you are running a standard LAMP or MERN stack pipeline, you have likely accepted slow builds as a fact of life. You shouldn't. In my decade of managing infrastructure across the Nordics, I've found that roughly 80% of pipeline latency isn't CPU starvation; it's I/O wait.

Let’s get real. CI/CD processes are brutal on disks. npm install, composer update, and Docker layer extraction are essentially thousands of tiny read/write operations. When you run these on standard cloud block storage (which is often network-attached HDD or capped SSD), you are throttled by IOPS, not bandwidth. I recently audited a setup for an Oslo-based fintech startup where their build times dropped from 18 minutes to 3 minutes just by moving from a shared cloud runner to a dedicated NVMe VPS. Here is how we did it.
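Before blaming the CPU, confirm the diagnosis. Here is a minimal, Linux-only sketch that samples /proc/stat twice and prints the share of CPU time spent in I/O wait over one second; if this number spikes while npm install runs, storage is your bottleneck:

```shell
#!/bin/sh
# Sample the aggregate CPU counters twice, one second apart.
# Field order in /proc/stat: cpu user nice system idle iowait ...
read -r _ u1 n1 s1 id1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 id2 w2 _ < /proc/stat

total=$(( (u2 + n2 + s2 + id2 + w2) - (u1 + n1 + s1 + id1 + w1) ))
iowait=$(( w2 - w1 ))

echo "iowait over last second: $(( 100 * iowait / total ))%"
```

Run it during a build. Anything sustained above ~20% iowait means the disk, not the processor, is the thing your pipeline is waiting on.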

1. Stop Using Shared Runners for Heavy Lifting

Shared runners provided by GitHub or GitLab are convenient, but they are black boxes. You rarely know what hardware lies beneath, and you are competing for resources with thousands of other builds. In a post-Schrems II world (the ruling landed in July 2020), data residency is also a headache: sending your proprietary code to a runner in a US-jurisdiction region can raise red flags in a compliance audit here in Norway.

The fix? Self-hosted runners. You control the hardware, the caching, and the data location. By deploying a runner on a CoolVDS instance in Oslo, you keep data within Norwegian borders (satisfying Datatilsynet requirements) and gain raw access to local NVMe storage.

Deploying a GitLab Runner on Ubuntu 20.04

Don't just install it. Optimize it. Here is the battle-tested configuration we use.

# 1. Install Docker first (assume standard Docker CE installation)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# 2. Install the Runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

# 3. Register the runner (non-interactive, since we pass all options as flags)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image "docker:20.10.7" \
  --description "coolvds-nvme-runner-oslo" \
  --tag-list "nvme,fast,norway"
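Registration only gets you a working runner; /etc/gitlab-runner/config.toml controls how hard it works. Here is a sketch of the tuning we would start from (the concurrency value and pull policy are assumptions; size them to your instance):

```toml
# /etc/gitlab-runner/config.toml (excerpt; values are a starting point)
concurrent = 4                       # run up to 4 jobs in parallel on this host

[[runners]]
  name = "coolvds-nvme-runner-oslo"
  executor = "docker"
  [runners.docker]
    image = "docker:20.10.7"
    pull_policy = "if-not-present"   # skip re-pulling images already on local NVMe
```

On an NVMe-backed box, four concurrent jobs rarely contend on disk; on shared cloud storage, even two would fight each other.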

2. The Holy Grail: Docker BuildKit

If you are still building Docker images the old way in 2021, you are wasting time. Docker 18.09 introduced BuildKit, but many DevOps engineers still haven't enabled it by default. BuildKit allows for parallel build execution and significantly better caching logic.

To enable it on your self-hosted runner, set the DOCKER_BUILDKIT environment variable. In your .gitlab-ci.yml (Jenkins has an equivalent environment block):

variables:
  DOCKER_BUILDKIT: 1

build_image:
  stage: build
  script:
    # BuildKit only honors --cache-from if the cached image was built with
    # inline cache metadata, hence the BUILDKIT_INLINE_CACHE build arg
    - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
Pro Tip: On your CoolVDS server, edit /etc/docker/daemon.json to enable BuildKit system-wide and cap container log growth so heavy CI runs don't fill the disk:
{
  "features": { "buildkit": true },
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
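BuildKit also unlocks cache mounts, which keep package-manager caches on the runner's NVMe between builds without baking them into image layers. A sketch for a Node.js image (the base image and cache path are illustrative):

```dockerfile
# syntax=docker/dockerfile:1.2
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
# The npm cache persists on the build host between runs, not in the image
RUN --mount=type=cache,target=/root/.npm \
    npm ci --cache /root/.npm --prefer-offline
COPY . .
```

The first build pays full price; every subsequent build hits the local cache at NVMe speed instead of the npm registry.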

3. Aggressive Caching Strategies

Downloading node_modules or vendor directories for every single commit is bandwidth suicide. While NIX (Norwegian Internet Exchange) is fast, latency adds up. You must cache these directories locally on the runner.

However, relying on the distributed cache (S3/MinIO) can sometimes be slower than downloading fresh if the network is the bottleneck. Since we are using a high-performance VPS, we want to use local filesystem caching.

Here is a refined .gitlab-ci.yml snippet optimized for a persistent runner:

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .npm/

install_dependencies:
  image: node:14-alpine
  script:
    - npm ci --cache .npm --prefer-offline
  tags:
    - nvme  # Targets our CoolVDS runner
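One caveat with the docker executor: the cache lands in /cache inside the job container, which is discarded with the container unless you persist it. One way to pin it to the host's local NVMe (the host path here is an assumption, pick your own):

```toml
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  [runners.docker]
    # Bind the job's /cache directory to a persistent path on local NVMe,
    # so node_modules and .npm survive between pipelines without a network hop
    volumes = ["/srv/gitlab-runner/cache:/cache"]
```

This keeps cache restores on the local disk entirely, sidestepping the S3/MinIO round trip mentioned above.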

4. The Hardware Reality: NVMe vs. SSD vs. HDD

Why do we insist on NVMe at CoolVDS? It comes down to queue depth and IOPS. A standard SATA SSD hits a wall around 500-600 MB/s. NVMe drives, utilizing the PCIe bus, can push 3,500 MB/s or more. In a CI pipeline where you are extracting a 1GB Docker image, that gap is the difference between a 10-second wait and a 2-second one.

Storage Type        | Random Read IOPS | Throughput    | CI/CD Impact
Traditional HDD     | ~100             | ~120 MB/s     | Unusable for modern CI
Standard Cloud SSD  | ~10,000          | ~550 MB/s     | Acceptable for small projects
CoolVDS NVMe        | ~400,000+        | ~3,000+ MB/s  | Instant artifact extraction
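Don't take any provider's IOPS table on faith, ours included; measure it. A sketch using fio for random 4K reads (standard fio flags; the test file lands in the current directory and is cleaned up afterwards):

```shell
#!/bin/sh
# Benchmark random 4K read IOPS with fio, if it's available on this host.
# Look for the "read: IOPS=..." line in the output.
if command -v fio >/dev/null 2>&1; then
  fio --name=randread --rw=randread --bs=4k --size=256m \
      --ioengine=psync --direct=1 --runtime=10 --time_based \
      --filename=fio-testfile
  rm -f fio-testfile
else
  echo "fio not installed; try: apt-get install fio"
fi
done_flag=1
```

Run it on your current runner and on an NVMe instance; the numbers usually make the argument for you.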

5. Kernel Tuning for Network Latency

If your runner pushes artifacts to a registry (like Docker Hub or a private Harbor instance), the TCP handshake overhead can add up. Norway has excellent connectivity, but we can squeeze more out of the Linux kernel.

Add these to /etc/sysctl.conf on your build server to optimize the TCP stack for high-throughput bursts, typical in CI/CD artifact uploads:

# Increase TCP window size for high-bandwidth networks
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Enable TCP Fast Open (reduce network latency)
net.ipv4.tcp_fastopen = 3

# Increase the backlog for heavy connection bursts
net.core.netdev_max_backlog = 5000

Apply changes with sudo sysctl -p.
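To confirm the values actually took effect, read them back from /proc/sys (equivalent to sysctl -n, with no extra tooling required):

```shell
#!/bin/sh
# Read the tuned values straight from the /proc/sys tree
for key in net/core/rmem_max net/core/wmem_max net/ipv4/tcp_fastopen; do
  dotted=$(echo "$key" | tr '/' '.')
  printf '%s = %s\n' "$dotted" "$(cat /proc/sys/$key)"
done
```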

Why Location Matters

Speed is physics. Light travels at a finite speed. If your developers are in Oslo or Bergen, and your build server is in Virginia (US-East), you are dealing with 90ms+ latency per round trip. For interactive SSH sessions during debugging or transferring large artifacts, this is painful. By keeping your build infrastructure in Norway, you reduce latency to single-digit milliseconds.

Furthermore, using a Norwegian provider ensures your intellectual property remains under Norwegian jurisdiction, a critical factor for GDPR compliance following the Schrems II ruling. You don't need a legal team to tell you that data sovereignty is easier when the data never leaves the country.

The Bottom Line: Your developers' time is the most expensive resource you have. Saving $10 a month on a cheap, slow VPS costs you thousands in lost productivity. Switch to an NVMe-backed pipeline.

Ready to cut your build times in half? Deploy a high-performance runner on CoolVDS today and experience the difference raw IOPS makes.