
Stop Burning CPU Cycles: Optimizing CI/CD Pipelines with Self-Hosted Runners in Norway

If I have to stare at a "Pending" status on a GitLab pipeline for more than 30 seconds, I start looking for something to fix. The "coffee break" excuse for long build times is dead. In 2024, if your pipeline takes 20 minutes to deploy a hotfix, you aren't just losing time; you're bleeding context-switching costs and delaying time-to-market.

Most teams start with shared runners provided by SaaS platforms. It works fine for Hello World. But once you introduce integration tests, Docker image building, and complex migrations, shared runners become a choke point. They are often throttled, run on spinning rust (HDD) or slow network storage, and reside in data centers thousands of kilometers away from your target infrastructure.

Here is the reality: IOPS and network latency are the silent killers of CI/CD performance.

This guide breaks down how to architect a high-performance build environment using self-hosted runners on KVM-based VPS infrastructure, specifically tailored for the Nordic region where data sovereignty (Datatilsynet requirements) and latency to local exchanges matter.

The Bottleneck: Why Your Pipeline Crawls

I recently audited a setup for a logistics firm in Oslo. Their deployment pipeline was taking 18 minutes. The bottleneck wasn't the code; it was the infrastructure.

  • Network Latency: Pulling Docker images from US-East to a runner in Frankfurt, then pushing artifacts to a server in Oslo. That round-trip adds up.
  • I/O Wait: npm install or mvn install creates thousands of small files. On shared hosting with noisy neighbors, file system latency destroys build speeds.
  • Cold Caches: Shared runners often start fresh. You pay the penalty of downloading the internet every single time.

The solution isn't "better code." It's raw, dedicated power. By moving the runner to a CoolVDS instance with local NVMe storage and establishing a persistent cache, we dropped that 18-minute build to 4 minutes.
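A quick way to see whether your current runner suffers from this: time a burst of small-file writes, which is roughly what npm install or mvn install does to the filesystem. A crude probe, not a real benchmark like fio, but enough to compare hosts:

```shell
# Crude small-file I/O probe: dependency installs create thousands of
# tiny files, so per-file latency matters more than sequential throughput.
dir=$(mktemp -d)
start=$(date +%s%N)
for i in $(seq 1 2000); do echo data > "$dir/f$i"; done
sync
end=$(date +%s%N)
echo "2000 small files in $(( (end - start) / 1000000 )) ms"
rm -rf "$dir"
```

Run it on your shared runner and on a local NVMe box; an order-of-magnitude gap between the two numbers is common and explains most "mysterious" install-step slowness.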

Step 1: The Infrastructure Layer

You need a clean KVM environment. Container-based virtualization (like LXC/OpenVZ) creates headaches for Docker-in-Docker (dind) workflows due to kernel security restrictions. We need a kernel we can control.

Pro Tip: When provisioning your CoolVDS instance, ensure you select a plan with dedicated CPU threads. Compiling code is CPU intensive. If the host steals cycles, your build hangs. Consistency > Burst speed.
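You can verify whether the hypervisor is stealing cycles from your guest by reading the "steal" counter in /proc/stat (the 9th value on the aggregate cpu line). A value that keeps climbing while your builds run means you are sharing physical cores:

```shell
# "steal" = jiffies the hypervisor ran other guests while this VM
# wanted CPU. Persistently growing values under load mean contention.
steal=$(awk '/^cpu /{print $9}' /proc/stat)
echo "cumulative steal jiffies: $steal"
```

Sample it before and after a build; on a plan with dedicated threads the delta should stay at or near zero.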

Provisioning the Runner

Assume a standard Debian 12 (Bookworm) environment on your node. First, strip the bloat: we only want the Docker engine and the runner agent.

# Remove legacy packages to prevent conflicts
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add official Docker repo (GPG key and apt source setup omitted for brevity)
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Tune the daemon for overlay2 performance and bounded log growth
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "2"
  }
}
EOF
sudo systemctl restart docker

Step 2: Configuring the GitLab Runner for Concurrency

The default configuration for most runners is too conservative. We want to utilize the high I/O throughput of the NVMe drives available on CoolVDS.

Here is an optimized config.toml specifically for a 4-vCPU instance. Note the usage of limit and request_concurrency.

concurrent = 4
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "coolvds-norway-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  limit = 4
  request_concurrency = 4
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    ServerAddress = "minio-local:9000"
    AccessKey = "CACHE_ACCESS_KEY"
    SecretKey = "CACHE_SECRET_KEY"
    BucketName = "runner-cache"
    Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 536870912 # Increase SHM for browser tests (512MB)

Critical Detail: Mounting /var/run/docker.sock allows the container to spawn sibling containers rather than using Docker-in-Docker. This drastically improves performance by removing a layer of filesystem virtualization, but it requires trust in your pipeline scripts.
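On the pipeline side, the sibling-container pattern needs no dind service declaration: the docker CLI inside the job talks straight to the host daemon through the mounted socket. A minimal sketch, where the job name and registry address (registry.example.com) are placeholders for your own:

```yaml
# .gitlab-ci.yml (sketch) -- registry.example.com is a placeholder.
# No "services: docker:dind" block: the socket mount in config.toml
# exposes the host daemon directly.
build-image:
  stage: build
  image: docker:24.0.5
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA
```

Layer caching is the hidden win here: because images are built on the host daemon, layers survive between jobs automatically.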

Step 3: Local Caching with MinIO

Fetching cache from AWS S3 in US-East to a server in Oslo is inefficient. Keep the data close. Deploy a local MinIO instance on the same LAN or even the same host as your runner.

# docker-compose.yml for MinIO Cache
version: '3.8'
services:
  minio:
    image: minio/minio:RELEASE.2024-01-18T22-51-28Z
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: SuperSecretPassword123!
    volumes:
      - ./minio-data:/data
    restart: unless-stopped

By pointing your runner's cache to this local instance, cache extraction becomes instant. You are limited only by the NVMe read speeds, which on CoolVDS typically exceed 3000 MB/s.
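On the job side nothing changes besides declaring what to cache. A sketch for a Node project, assuming a package-lock.json at the repository root:

```yaml
# Cache key derived from the lockfile: a new key (and a cold cache)
# only when dependencies actually change.
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/
```

The runner transparently ships these paths to the MinIO bucket configured in config.toml; jobs never need to know where the cache physically lives.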

Step 4: Database Tuning for Integration Tests

If your CI pipeline spins up a MySQL or PostgreSQL container for testing, standard configs will slow you down. You don't need ACID compliance for a test database that exists for 2 minutes. You need speed.

Map a custom config file to your test database container to disable durability features:

# my-ci.cnf
[mysqld]
# Disable disk sync for speed
innodb_flush_log_at_trx_commit = 0
sync_binlog = 0
innodb_doublewrite = 0

# Memory optimization
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
max_connections = 100

This configuration risks data loss on a power outage, but for a CI job? We don't care. If the server dies, we restart the build. This change alone reduced one client's test suite duration by 40%.
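If you would rather not mount a config file into the service container, the same durability trade-offs can be passed as command-line flags in .gitlab-ci.yml. A sketch assuming a MySQL 8 service (the alias "db" is a placeholder):

```yaml
# Same trade-offs as my-ci.cnf, expressed as mysqld startup flags.
services:
  - name: mysql:8.0
    alias: db
    command:
      - --innodb-flush-log-at-trx-commit=0
      - --sync-binlog=0
      - --innodb-doublewrite=OFF
```

This keeps the tuning visible in the pipeline definition itself, which is handy when different jobs need different database behavior.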

The Nordic Context: GDPR and Latency

Beyond speed, there is the legal reality. Under GDPR and the Schrems II ruling, moving personal data across borders is fraught with risk. If your CI/CD pipeline sanitizes production database dumps for staging environments, that process must happen within a compliant jurisdiction.

Running your pipelines on CoolVDS instances in Norway ensures that data never leaves the EEA/Norwegian legal framework. Furthermore, latency to the Norwegian Internet Exchange (NIX) is negligible. If your production servers are in Oslo, your deployment scripts (Ansible/Terraform) execute almost instantly compared to running them from a runner in Virginia.

Trade-offs and Conclusion

Self-hosting isn't free. You own the uptime. If the Docker daemon crashes, you fix it. But the math is simple: compare the cost of engineering hours wasted waiting for builds against the cost of a managed VPS.

For bursty CI/CD workloads, we recommend CoolVDS NVMe instances: the I/O throughput handles thousands of small-file operations far better than the standard SATA SSDs found in budget hosting.

Stop accepting slow builds as a fact of life. Take control of your infrastructure.

Ready to optimize? Spin up a high-performance NVMe instance on CoolVDS today and see your build times drop.