
Stop Watching Progress Bars: Optimizing CI/CD Pipelines on Norwegian Infrastructure (2021 Edition)

The Hidden Cost of "Waiting for Runner"

It is 2021, and yet I still see senior developers playing table tennis while their pipelines run. We joke about "compiling," but the reality is grim. A 15-minute build time isn't just 15 minutes lost; it is a context-switch penalty that destroys focus for the next hour. If you are running Jenkins or GitLab CI on budget infrastructure, you are bleeding money.

I recently audited a setup for a fintech client in Oslo. Their complaint? "Builds take 40 minutes." The culprit wasn't their code—it was their infrastructure. They were running heavy Java builds inside Docker containers on cheap, spinning-disk VPS instances hosted in Frankfurt. The I/O wait (iowait) was consistently hitting 40%. They were bottlenecked by disk speed, not CPU.

The I/O Bottleneck: Why NVMe Matters in 2021

CI/CD is fundamentally an I/O-intensive operation. You are pulling images, extracting layers, compiling binaries, writing artifacts, and pushing results. If your underlying storage is SATA SSD (or heaven forbid, HDD), your CPU is spending half its life waiting for data.

This is where the hardware architecture of your hosting provider becomes critical. At CoolVDS, we enforced an all-NVMe standard because we saw this pattern repeatedly. NVMe drives offer significantly higher IOPS (Input/Output Operations Per Second) and lower latency than SATA SSDs.

Pro Tip: Check your disk latency. If you are seeing average wait times over 10ms during a build, upgrade your storage. Use `iostat -x 1` to monitor this in real-time.
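
Here is a minimal sketch of what that looks like in practice. It assumes the sysstat package is installed; exact column names vary slightly between sysstat versions.

# Extended device statistics, refreshed every second while a build is running
iostat -x 1
# Watch the await columns (r_await / w_await on newer sysstat, await on older):
# sustained values above ~10ms mean the build is waiting on disk, not CPU.
# %util pinned near 100% is another sign you are I/O-bound.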

Optimizing Docker for Speed

Hardware helps, but bad configuration kills. In 2021, if you aren't using multi-stage builds to keep your images thin, you are doing it wrong. But let's go deeper. Let's talk about the Docker Storage Driver.

On many older Linux kernels (CentOS 7 era), the default might fall back to `devicemapper`. Ensure you are using `overlay2`. It is faster and more efficient with inode usage.

# Check your storage driver
docker info | grep "Storage Driver"
# Output should be: Storage Driver: overlay2
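
If you find yourself stuck on `devicemapper`, switching is a daemon-level change. Here is a minimal sketch, assuming a systemd-managed Docker and no existing /etc/docker/daemon.json; be aware that changing the storage driver makes existing images and containers invisible until you re-pull or migrate them.

# Write a minimal daemon.json selecting overlay2, then restart Docker
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info | grep "Storage Driver"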

If you are self-hosting GitLab Runners (which you should be, for reasons I'll discuss below), you can mount the Docker socket to avoid the "Docker-in-Docker" (dind) performance penalty, although this comes with security trade-offs. For a trusted environment, socket binding is drastically faster.
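
For reference, one common socket-binding setup is to run the runner itself as a container that talks to the host's Docker daemon, and then expose the same socket to job containers via the volumes list in config.toml (shown commented out in the config further down). A sketch, with a hypothetical config path:

# Run the GitLab Runner container against the host Docker daemon (no nested dind daemon)
docker run -d --name gitlab-runner --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest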

Compliance: The Schrems II Reality

We need to talk about the elephant in the room: Schrems II. Since the CJEU ruling in July 2020, relying on US-based cloud providers for CI/CD artifacts containing PII (Personally Identifiable Information) is legally risky for Norwegian companies. Datatilsynet (the Norwegian Data Protection Authority) has been clear about the implications of such data transfers.

By hosting your runners on a VPS in Norway, you solve two problems at once:

  1. Compliance: Data stays within the EEA/Norway jurisdiction, simplifying GDPR adherence.
  2. Latency: If your dev team is in Oslo, pushing a 2GB Docker image to a server in Oslo is significantly faster than pushing it to US-East-1. Physics still applies, and you can measure it yourself (see the snippet below).
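
If you would rather not take my word for the latency point, time a push of a representative image to each candidate registry. The hostnames below are hypothetical placeholders; substitute your own registries.

# Compare push times to a Norwegian/EEA registry vs. a US region
time docker push registry.oslo.example/myapp:benchmark
time docker push registry.us-east.example/myapp:benchmark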

Configuration Deep Dive: GitLab Runner

Let's look at a config.toml that actually works for high-concurrency environments. The default settings are too conservative. If you have a CoolVDS instance with 8 vCPUs, do not limit yourself to 1 job at a time.


concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "CoolVDS-Oslo-Runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.5"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    # Note: overlay2 is the *storage* driver and is set at the Docker daemon level
    # (see the daemon.json snippet above), not via a per-runner setting here.
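    # To use socket binding instead of dind (trusted environments only), extend the
    # volumes line above, e.g.:
    # volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]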

A note on `check_interval = 0`: GitLab Runner treats 0 (or any lower value) as "use the default", which is a 3-second poll interval, so queued jobs are picked up within seconds. Frequent polling increases CPU usage slightly on the runner, but it reduces the "pending" time for developers. On a dedicated VPS, this is acceptable; on shared hosting, you might get throttled (another reason we use KVM isolation at CoolVDS: your CPU cycles are yours).

The Cache Strategy

Downloading dependencies (NPM modules, Maven artifacts, Go packages) takes time. Stop doing it every single run. You need a persistent cache. If you are using a single VPS for your runner, you can use local volume mapping. If you have a distributed fleet, you need S3-compatible object storage (like MinIO hosted locally).
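
If you go the distributed route, a single-node MinIO instance is enough to start; you would then point the runner's cache configuration at it in config.toml. A rough sketch with placeholder paths and credentials (newer MinIO releases use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD, older ones use MINIO_ACCESS_KEY/MINIO_SECRET_KEY):

# Single-node MinIO for a shared runner cache (placeholder credentials, do not use as-is)
docker run -d --name minio --restart always \
  -p 9000:9000 \
  -v /srv/minio/data:/data \
  -e MINIO_ROOT_USER=runner-cache \
  -e MINIO_ROOT_PASSWORD=change-me-please \
  minio/minio server /data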

For a local setup on a fast NVMe VPS, map the cache directory directly:


# In your .gitlab-ci.yml
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .m2/repository/
    - .cache/go-build/

Combined with the high read/write speeds of NVMe storage, this turns a 5-minute `npm install` into a 20-second checksum verification.

Architecture: KVM vs. Containers

Why do we recommend running your CI infrastructure on KVM-based VPS (like CoolVDS) rather than inside a container on a massive shared cluster? Isolation.

In a containerized "shared kernel" environment (like OpenVZ or basic container hosting), a neighbor compiling the Linux kernel can thrash the host's page cache and dentry/inode caches and saturate shared disk I/O, which directly degrades your filesystem performance. With KVM, you run your own kernel with dedicated resource allocation. For CI/CD, where bursts of disk activity are the norm, "noisy neighbors" are the enemy of consistent build times.
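
A quick way to see whether neighbors are stealing your cycles on any Linux guest is to sample CPU statistics while a build is running:

# Sample system stats every 5 seconds, 5 times, during a build
vmstat 5 5
# The "st" (steal) column shows CPU time taken by the hypervisor for other guests;
# anything consistently above a few percent means your build times will never be stable.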

Comparison: Build Time for Spring Boot App

| Infrastructure | Storage Type | Avg Build Time | Consistency |
| --- | --- | --- | --- |
| Standard Cloud VPS | SATA SSD (Networked) | 12m 45s | Low (± 4m) |
| CoolVDS High Perf | Local NVMe | 4m 12s | High (± 20s) |

Final Thoughts

Optimizing CI/CD is about removing friction. You need to attack latency at every level: network (host in Norway/Europe), disk (use NVMe), and software (optimize caching and drivers). Don't let your infrastructure be the reason your team ships features slowly.

If you are ready to stop waiting for pipelines, spin up a high-performance, KVM-backed instance. Test the I/O difference yourself.