Stop the Wait: Optimizing CI/CD Pipelines with Self-Hosted Runners and NVMe

Why Your CI/CD Pipeline Crawls (And How to Fix It with Local Iron)

It is 2019. If your developers are sword-fighting on office chairs while waiting for a build to finish, you are losing money. I recently consulted for a fintech startup in Oslo where a simple frontend deployment took 28 minutes. 28 minutes. In an agile environment, that isn't just a coffee break; it is a systemic failure.

The culprit wasn't their code complexity or their test coverage. It was the infrastructure beneath their CI pipeline. They were relying on shared, oversold runners from a major US cloud provider, choking on I/O operations.

In this analysis, we are going to dissect the physical bottlenecks of Continuous Integration, specifically for Norwegian teams, and look at why self-hosted runners on high-performance infrastructure (like CoolVDS) are the only logical path for serious engineering teams.

The Hidden Bottleneck: It's Not CPU, It's I/O

Most DevOps engineers obsess over CPU cores. While compiling C++ or Rust requires raw compute, the vast majority of web pipelines (Node.js, PHP, Python) are I/O bound. Consider npm install. It is essentially a stress test for your filesystem, generating tens of thousands of tiny files.

On a standard cloud instance using network-attached storage (Ceph or similar block storage over the network), the latency of each small file write accumulates. If your disk latency is 2 ms and you write 10,000 files serially, that is 20 seconds of pure wait before a single useful byte gets compiled. The math gets ugly fast.

Pro Tip: Always check your disk wait times. If you see high %iowait in top during a build, your storage is the bottleneck, not your code.
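If you have the sysstat package installed, iostat -x 1 gives you per-device wait times directly. Without it, you can sample the kernel's cumulative iowait counter from /proc/stat yourself — a rough sketch:

```shell
# Cumulative iowait ticks are the 5th value after "cpu" in /proc/stat.
# A large delta between two snapshots means the CPU sat waiting on storage.
read_iowait() { awk '/^cpu /{print $6}' /proc/stat; }

before=$(read_iowait)
sleep 1
after=$(read_iowait)
echo "iowait ticks in the last second: $((after - before))"
```

Run this while a build is in flight; a steadily climbing delta is your smoking gun.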

This is where CoolVDS differentiates itself. We don't use network storage for our root volumes. We use local NVMe storage. The difference between SATA SSD and NVMe in a CI context is staggering—often reducing node_modules extraction time by 60-70%.
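You can feel this difference with a crude micro-benchmark: time a burst of tiny file creates, which is roughly what a package manager does when extracting node_modules. (fio is the proper tool for serious measurements; this sketch just illustrates the failure mode, and absolute numbers will vary by host.)

```shell
# Time the creation of 2,000 tiny files in a temp directory --
# a rough stand-in for a package manager unpacking node_modules.
dir=$(mktemp -d)
start=$(date +%s%N)
i=0
while [ "$i" -lt 2000 ]; do
  printf 'x' > "$dir/file-$i"
  i=$((i + 1))
done
end=$(date +%s%N)
echo "2000 files in $(( (end - start) / 1000000 )) ms"
rm -rf "$dir"
```

On local NVMe this completes in a blink; on network-attached block storage the per-write round trips add up visibly.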

Architecture: The Case for Self-Hosted GitLab Runners

GitLab CI has become the de facto standard for many European teams, especially after Microsoft's acquisition of GitHub last year made some organizations rethink where their code lives. While GitLab's shared runners are convenient, they are slow and geographically distant.

By deploying your own GitLab Runner on a VPS in Norway, you gain three critical advantages:

  1. Proximity: Latency to the NIX (Norwegian Internet Exchange) matters when pushing/pulling heavy Docker images.
  2. Resource Isolation: No "noisy neighbors" stealing your CPU cycles during a critical hotfix.
  3. Data Sovereignty: Your source code never leaves the jurisdiction of Datatilsynet (The Norwegian Data Protection Authority).

Configuring the Runner for Performance

Let's get our hands dirty. Assuming you have provisioned a CoolVDS instance with Ubuntu 18.04 LTS, here is how you set up a high-performance runner.

First, install Docker. Do not use the old repositories.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce

Once Docker is running, verify you are using the overlay2 storage driver. This is crucial for layer caching efficiency.

docker info | grep "Storage Driver"

If it says devicemapper, you need to update your kernel or Docker config immediately. overlay2 is the only acceptable driver for modern pipelines.
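If you do need to switch, the driver can be pinned explicitly in /etc/docker/daemon.json before restarting the daemon. A minimal sketch (back up any existing daemon.json first, since this overwrites it):

```shell
# Pin the storage driver explicitly, then restart the daemon.
# WARNING: changing the storage driver makes existing images/containers
# invisible until you switch back -- do this on a fresh runner host.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```

Re-run docker info | grep "Storage Driver" afterwards to confirm.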

The Runner Configuration

Register the runner against your GitLab instance. We will use the Docker executor.

sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image alpine:latest \
  --description "coolvds-nvme-runner-01" \
  --tag-list "nvme,norway,docker"

Now, the critical part. The default configuration is too conservative. We need to edit /etc/gitlab-runner/config.toml to allow concurrent builds and efficient caching.

concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    # Use the socket binding carefully for Docker-in-Docker
    # volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]

By setting concurrent = 4, we utilize the multi-core capabilities of CoolVDS instances, allowing frontend and backend tests to run in parallel.

Optimizing the Pipeline: Caching is King

Hardware solves the I/O problem, but configuration solves the network problem. You must cache your dependencies. Downloading the internet for every build is amateur hour.

Here is a robust .gitlab-ci.yml pattern for a Node.js application that leverages local caching:

image: node:10-alpine

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

variables:
  # npm ci wipes node_modules on every run, so caching node_modules is
  # wasted effort -- cache npm's download cache instead
  NPM_CONFIG_CACHE: "$CI_PROJECT_DIR/.npm"

build_app:
  stage: build
  script:
    - npm ci --prefer-offline
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

test_app:
  stage: test
  script:
    - npm ci --prefer-offline
    - npm run test
  dependencies:
    - build_app

Notice the use of npm ci instead of npm install. In 2019, this is the strictly correct way to build in CI environments as it relies on the lockfile and deletes node_modules before installation, ensuring a clean state without the guesswork.

Network Tuning for Northern Europe

When your server is in Oslo, you have excellent connectivity to the Nordics, but you may need to tune the TCP stack for high-throughput artifact uploads.

Add these lines to your /etc/sysctl.conf on the CoolVDS host:

net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr

Then apply it:

sudo sysctl -p

Google's BBR congestion control algorithm (available since kernel 4.9; Ubuntu 18.04 ships 4.15) can significantly improve throughput for large Git objects or Docker image pushes over the public internet.
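To confirm the change actually took effect, read the values straight back from the kernel:

```shell
# The congestion control algorithm currently in use (expect "bbr" after sysctl -p)
cat /proc/sys/net/ipv4/tcp_congestion_control
# What the kernel has available; if bbr is missing, the tcp_bbr module
# may need loading first (sudo modprobe tcp_bbr)
cat /proc/sys/net/ipv4/tcp_available_congestion_control
```

If the first line still says cubic, check that /etc/sysctl.conf was saved and re-apply with sudo sysctl -p.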

The Compliance Angle: GDPR & Schrems

We cannot ignore the legal landscape. Since GDPR enforcement began last year, legal departments are increasingly nervous about where code resides. Source code often contains PII (Personally Identifiable Information) in test datasets or hardcoded config files (bad practice, but it happens).

Hosting your CI/CD runner on a US-controlled cloud region exposes you to the US CLOUD Act. By using CoolVDS, a Norwegian provider, you maintain a cleaner compliance posture. Your data stays in Oslo. It does not replicate to a bucket in Virginia.

Conclusion: Stop Renting Slow Computers

Your engineers are the most expensive asset you have. Saving $20 a month on a cheap VPS while wasting 50 engineering hours a year waiting for builds is bad math.

To fix your pipeline today:

  1. Audit your build times.
  2. Switch to local NVMe storage.
  3. Move off shared runners.

Don't let slow I/O kill your momentum. Deploy a high-performance Runner on CoolVDS today and watch those build times drop.