Stop Waiting for Builds: Optimizing CI/CD Pipelines on Norwegian Soil

It’s 15:45 on a Friday. You push a hotfix to master. The pipeline triggers. And you wait. Ten minutes pass. npm install is still running. You stare at the spinner, knowing that if this fails, your weekend is gone. We have all been there, and frankly, it is usually not the code's fault. It is the infrastructure choking on I/O operations.

In 2022, with the maturity of containerization, a slow CI/CD pipeline is an architectural failure. I've spent the last month auditing pipelines for a fintech client in Oslo, and the pattern is always the same: bloated Docker contexts, unoptimized layer caching, and shared runners fighting for disk throughput on oversold clouds.

Here is how we fix it, focusing on raw performance and the specific constraints of hosting in the Nordic region.

1. The Silent Killer: Disk I/O Wait

Most developers throw more CPU cores at a slow pipeline. That is a mistake. CI/CD jobs—especially build steps like compiling assets, docker build, or installing dependencies—are heavily I/O bound. When you use shared runners from the big US cloud providers, you are often sitting on a noisy disk array sharing IOPS with a thousand other tenants.

You can diagnose this immediately on your build server. Don't look at load average alone. Look at %iowait.

vmstat 1 5

If your wa (wait) column is consistently above 10-15%, your CPU is sitting idle while it waits for disk I/O to complete. I recently saw a pipeline taking 25 minutes simply because the node_modules extraction was saturating the disk.

Diagnosing with iostat

To confirm if your storage is the bottleneck, run this during a build:

iostat -xz 1

Check the await (average time for I/O requests to be served) and %util columns. If await spikes over 10ms on a standard SSD, or your utilization hits 100% while writing cache, your "cheap" VPS is costing you hours of engineering time.
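
To get this evidence from an actual pipeline run rather than eyeballing a terminal, you can log iostat in the background for the duration of the build. This is a minimal sketch; the image tag and the nvme0n1 device name are placeholders you would swap for your own.

# Record extended device stats once per second while the build runs
iostat -xz 1 > /tmp/iostat-build.log &
IOSTAT_PID=$!

# Replace with your actual build step
docker build -t myapp:ci .

kill "$IOSTAT_PID"

# Review await and %util for the build disk (adjust the device name)
grep -E 'Device|nvme0n1' /tmp/iostat-build.log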

2. Docker BuildKit and Layer Caching

If you are still building Docker images the old way in 2022, stop. Docker BuildKit is the modern standard, yet I see DOCKER_BUILDKIT=1 missing from so many pipelines. BuildKit allows for parallel build execution and significantly smarter caching.

But BuildKit can't fix a bad Dockerfile. Order matters. If you copy your source code before installing dependencies, every commit invalidates the dependency layer and forces a full reinstall.

The Wrong Way:

FROM node:16-alpine
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]

The Optimized Way:

# syntax=docker/dockerfile:1.4
FROM node:16-alpine
WORKDIR /app

# Copy only manifests first to leverage cache
COPY package.json package-lock.json ./

# Mount a cache directory for npm to speed up repeated builds
RUN --mount=type=cache,target=/root/.npm \
    npm ci --prefer-offline

COPY . .
CMD ["node", "server.js"]

By using the --mount=type=cache feature (available in stable Docker releases this year), we persist the npm cache between builds even if the layer is rebuilt. This one change dropped our build times from 8 minutes to 90 seconds.
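
For the cache mount to pay off, BuildKit has to be enabled for the build itself, and on ephemeral runners you also want to reuse layer cache from your registry. A rough sketch of the build step, with registry.example.com standing in for your own registry:

# Enable BuildKit explicitly (harmless if it is already the default)
export DOCKER_BUILDKIT=1

# Reuse the last pushed image as a layer cache source
docker pull registry.example.com/app:latest || true

docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from registry.example.com/app:latest \
  -t registry.example.com/app:latest .

docker push registry.example.com/app:latest

BUILDKIT_INLINE_CACHE=1 embeds cache metadata in the pushed image, so the next run can resolve --cache-from even on a freshly provisioned runner.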

3. Infrastructure: Why "Managed" Runners Fail

Shared runners provided by GitHub or GitLab are convenient, but they are general-purpose. They are not tuned for your workload, and they are certainly not physically located where you need them.

For Norwegian teams, latency to the data center matters. If your artifacts are stored in an S3 bucket in Frankfurt, but your runner is in Virginia, and your deployment target is a VPS in Oslo, you are hair-pinning traffic across the Atlantic twice. This adds latency and potential points of failure.
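
You can see where that time goes without any special tooling. curl's built-in timing variables break a download into DNS, connect, and transfer phases; the artifact URL below is a placeholder for whatever your pipeline actually pulls.

# Time each phase of an artifact download from the runner
curl -o /dev/null -s \
  -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://artifacts.example.com/builds/app.tar.gz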

Pro Tip: Data sovereignty is not just a buzzword; it is a legal requirement under GDPR and the Schrems II ruling. Hosting your CI artifacts and runners on Norwegian soil, under Datatilsynet's jurisdiction, simplifies compliance significantly. We don't have to worry about US CLOUD Act subpoenas when the data never leaves a CoolVDS instance in Oslo.

We migrated our runners to self-hosted instances on CoolVDS. Why? Because we get dedicated KVM slices with direct NVMe pass-through. When you unzip a 2GB artifact, the NVMe storage handles the random writes without sweating. No noisy neighbors stealing IOPS.

4. Tuning the GitLab Runner

Deploying a self-hosted runner on a CoolVDS instance gives you granular control. We use the Docker executor. Here is the configuration we use to handle high concurrency without crashing the host.

File: /etc/gitlab-runner/config.toml

# Limit total parallel jobs to what the host's disk and RAM can actually handle
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-oslo-runner-01"
  url = "https://gitlab.com/"
  token = "TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  # Distributed cache stored on an internal MinIO (S3-compatible) instance
  [runners.cache]
    Type = "s3"
    ServerAddress = "minio.internal:9000"
    AccessKey = "minio"
    SecretKey = "minio123"
    BucketName = "runner-cache"
    Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    # Socket mount lets jobs drive the host daemon directly (sibling containers, not dind)
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Notice the volumes mount. By mapping the host's Docker socket into job containers, we let the runner spawn sibling containers rather than using the slower Docker-in-Docker (dind) approach. Be aware of the trade-off: any job can control the host's Docker daemon, so do not do this on runners that execute untrusted code from public repos. For private internal teams, it is vastly faster.
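
If you are reproducing this setup, most of the config above can be generated rather than hand-written. A sketch of the registration command, assuming a fresh instance with Docker and gitlab-runner already installed (the token is of course your own):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "TOKEN_HERE" \
  --name "coolvds-oslo-runner-01" \
  --executor "docker" \
  --docker-image "docker:20.10.16" \
  --docker-privileged \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache"

The global concurrent value still has to be edited by hand afterwards; register only writes the [[runners]] block.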

5. The Network Edge: Latency to NIX

Speed isn't just disk; it's network. If you are deploying to customers in Scandinavia, your build server should be peering at the Norwegian Internet Exchange (NIX). When we push a 500MB container image to our production registry, that transfer needs to be instantaneous.

CoolVDS instances are peered directly in Oslo. We tested transfer speeds from a build runner on CoolVDS to a staging server in our production environment (also in Oslo):

Source                   Destination   Throughput   Latency
US-East Cloud Runner     Oslo VPS      12 MB/s      115ms
Frankfurt Cloud Runner   Oslo VPS      45 MB/s      35ms
CoolVDS Oslo Runner      Oslo VPS      950 MB/s     <1ms

That 950 MB/s throughput means your deploy phase finishes in seconds, not minutes.
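
If you want to sanity-check your own path before migrating anything, a quick throughput and latency measurement is enough. This assumes iperf3 is installed on both ends and that 10.0.0.20 stands in for your deployment target:

# On the deployment target
iperf3 -s

# From the build runner: raw TCP throughput, then round-trip latency
iperf3 -c 10.0.0.20 -t 10
ping -c 10 10.0.0.20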

Conclusion

Pipeline optimization is a game of inches. You shave seconds with Docker caching, you shave minutes with NVMe I/O, and you ensure stability by controlling the hardware. Don't let a sluggish shared runner dictate your deployment schedule.

If you are serious about reducing your cycle time, stop renting shared CPU cycles. Spin up a dedicated NVMe instance on CoolVDS, install a runner, and watch your build times plummet. Your team (and your Friday evenings) will thank you.