Stop Renting Slow Pipelines: Optimizing CI/CD with Private NVMe Runners in Norway
There is nothing more soul-crushing than pushing a critical hotfix and watching the progress bar hang on "Pending" for ten minutes because your shared cloud provider is oversubscribed. I've been there. The battle-hardened DevOps engineer in me knows that if your pipeline takes longer than a coffee break, your feedback loop is broken.

In November 2023, relying on shared runners from the major US cloud providers is a gamble. Sometimes you get a decent instance; other times you get a noisy neighbor that throttles your CPU during `npm install`. For teams operating out of Scandinavia, there is also the latency tax—pushing artifacts to Frankfurt or Ireland adds up.

This guide isn't about the theory of Continuous Integration. It's about raw speed. We are going to look at how moving to private runners on high-performance infrastructure like CoolVDS can cut your build times by half, while keeping Datatilsynet (The Norwegian Data Protection Authority) happy.

The Hidden Bottleneck: I/O Wait

Most developers think CI/CD performance is CPU-bound. They are wrong. Most modern build processes are heavily I/O bound. Whether you are hydrating a `node_modules` black hole, pulling Docker layers, or compiling Rust crates, you are thrashing the disk.

I recently audited a pipeline for a FinTech client in Oslo. They were using standard shared runners. Their build time was 18 minutes. By analyzing the metrics, we saw the runner spent 40% of its time in iowait. The shared SSDs couldn't keep up with the small file writes.
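Before blaming the CPU, confirm the diagnosis on your own runner. A minimal sketch (Linux only, assuming nothing beyond awk; it reads the aggregate counters in /proc/stat, where field 6 of the `cpu` line is iowait jiffies—for live per-device numbers, use `iostat` from the sysstat package):

```shell
# Estimate the share of CPU time spent in iowait since boot.
# Sums all jiffy counters on the aggregate "cpu" line and divides
# the iowait counter (field 6) by that total.
awk '/^cpu / {
  total = 0
  for (i = 2; i <= NF; i++) total += $i
  printf "iowait: %.1f%% of CPU time since boot\n", ($6 / total) * 100
}' /proc/stat
```

Anything consistently above a few percent during builds is a strong hint that faster storage, not more cores, is the fix.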

We migrated them to a dedicated CoolVDS instance with NVMe storage. Without changing a single line of code in their `.gitlab-ci.yml`, build time dropped to 7 minutes. That is the power of raw I/O throughput.

Step 1: Configuring a Private GitLab Runner

Stop letting others control your infrastructure. Hosting your own runner gives you predictable performance and enables aggressive caching. Here is the battle-tested configuration we use for high-load Docker executors.

First, install the runner on your CoolVDS instance (Debian/Ubuntu assumption):

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner
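Installation alone does not attach the runner to your GitLab project; you also have to register it. A hedged sketch of a non-interactive registration (the URL, token, and description are placeholders—verify the flags against `gitlab-runner register --help` for your installed version):

```shell
# Register the runner against GitLab.com (placeholders: YOUR_TOKEN, description)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor "docker" \
  --docker-image "docker:24.0.5" \
  --description "coolvds-nvme-runner-01"
```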

The magic happens in `/etc/gitlab-runner/config.toml`. The default settings are too conservative. You need to raise the concurrency limit and, crucially, decide how build jobs access the Docker daemon.

concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 2147483648

Pro Tip: Notice the `shm_size = 2147483648` (2 GB). The default is often 64 MB, which causes headless browsers (Selenium/Puppeteer) to crash randomly during E2E tests. Don't let this bite you.

Step 2: Leveraging Docker BuildKit & Layer Caching

If you are not using Docker BuildKit in 2023, you are living in the past. It resolves the build graph up front and runs independent stages in parallel instead of executing the Dockerfile strictly top to bottom. However, on ephemeral runners, you lose the cache between jobs. On a private CoolVDS runner, the cache persists.

Enable BuildKit in your pipeline variables:

variables:
  DOCKER_BUILDKIT: 1
  DOCKER_DRIVER: overlay2
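If you also push images to a registry, BuildKit's inline cache lets even a freshly pruned runner reuse layers from the last published image. A sketch of a build job (the job name and the `:latest` tag convention are assumptions; `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are standard GitLab CI variables):

```yaml
build:
  stage: build
  script:
    # Pull the previous image purely as a cache source; tolerate a miss on first run
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    # BUILDKIT_INLINE_CACHE=1 embeds cache metadata so future builds can reuse layers
    - docker build
        --build-arg BUILDKIT_INLINE_CACHE=1
        --cache-from "$CI_REGISTRY_IMAGE:latest"
        -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
        -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker push "$CI_REGISTRY_IMAGE:latest"
```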

Here is how you structure your Dockerfile to maximize layer caching. Always copy package definitions before source code.

# BAD PRACTICE
# COPY . .
# RUN npm install

# GOOD PRACTICE
FROM node:18-alpine
WORKDIR /app

# Copy only manifests first
COPY package.json package-lock.json ./

# This layer is cached unless dependencies change
RUN npm ci --quiet

# Now copy source
COPY . .
RUN npm run build
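One caveat with `COPY . .`: every file in the build context can invalidate that layer, and a bloated context also slows the upload to the Docker daemon. A minimal `.dockerignore` helps on both counts (the entries below are typical examples—adjust to your repository):

```
# .dockerignore — keep the build context small and the COPY . . layer stable
node_modules
.git
dist
*.log
```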

Step 3: Network Latency and Sovereignty

Latency matters. If your production servers are in Norway (connected via NIX), but your CI runner is in Virginia, you are pushing gigabytes of Docker images across the Atlantic for every deploy. This adds minutes to your "Time to Recovery" during an outage.

By placing your runner on a CoolVDS server in Oslo, you achieve single-digit millisecond latency to your staging and production environments. Furthermore, for companies dealing with sensitive data (GDPR), ensuring that temporary build artifacts—which might contain database dumps or sensitive environment variables—never leave Norwegian jurisdiction is a massive compliance win.

Comparison: Build & Push Time (2GB Image)

| Runner Location   | Destination Registry | Transfer Time |
|-------------------|----------------------|---------------|
| US East (Shared)  | Oslo (Private)       | ~145 seconds  |
| Frankfurt (Cloud) | Oslo (Private)       | ~45 seconds   |
| CoolVDS Oslo      | Oslo (Private)       | ~4 seconds    |

The Trade-Offs

Running private infrastructure isn't maintenance-free. You need to manage disk space (prune those dangling Docker images) and keep the runner agent updated. If you are a team of two deploying once a week, shared runners are fine. But if you are a serious team deploying five times a day, the maintenance overhead is negligible compared to the hours saved waiting for builds.

We specifically configured CoolVDS KVM instances to handle this workload. Unlike container-based VPS solutions where CPU stealing is rampant, our KVM slice gives you dedicated resources. When your build scripts max out all cores, you get 100% of those cores.

Final Configuration: Auto-Cleanup

Since you are managing the server, don't let Docker eat your disk. Add this script to your CoolVDS instance and make it executable (`chmod +x /etc/cron.daily/docker-prune`—`run-parts` silently skips files without the exec bit) to keep the environment healthy:

#!/bin/bash
# /etc/cron.daily/docker-prune
docker system prune -af --filter "until=24h"

Pipeline efficiency is about removing friction. Low latency, high NVMe I/O, and data sovereignty are the pillars of a robust Norwegian DevOps strategy. Don't let your infrastructure be the reason you miss a deadline.

Ready to speed up your builds? Deploy a High-Performance NVMe Instance on CoolVDS today and see the difference dedicated resources make.