CI/CD Bottlenecks Are Killing Your Deployments: A Systems Architect’s Guide to Optimization in Norway
Most developers think a slow pipeline is a code problem. They blame Webpack, they blame the test suite, or they blame Java's startup time. I have spent the last decade debugging infrastructure across Europe, and I can tell you: 80% of the time, it's not your code. It's your hardware.
When you are running npm install or compiling Go binaries, you are fundamentally hammering the disk. If you are doing this on a cheap, oversold cloud instance hosted in Frankfurt while your team is pushing code from Oslo, you are fighting physics and losing. In this deep dive, we are going to tear apart a standard CI/CD pipeline, identify the I/O choke points, and rebuild it for speed using tools available right now in mid-2020.
The "Project Arctic" War Story
Last month, I audited a setup for a fintech client in Oslo. Their GitLab CI pipeline took 42 minutes. Their developers were pushing code, grabbing coffee, and by the time they came back, the context switch had already destroyed their productivity. They were ready to rewrite their entire backend in Rust just to shave off build time.
I didn't let them touch a line of code. Instead, we looked at the infrastructure.
They were running their runners on a "general purpose" public cloud instance in us-east-1 (Virginia) because it was cheap, while their repository and registry were hosted in Europe. Every git fetch and Docker layer pull paid a transatlantic round trip of roughly 100 ms, and across hundreds of requests per build those round trips compounded into minutes. On top of that, the "shared CPU" burst credits ran out halfway through the build, throttling the CPU to 20% of its rated capacity.
We moved the runners to a CoolVDS NVMe instance located physically in Norway, peered directly via NIX (Norwegian Internet Exchange). The result? The pipeline dropped to 6 minutes. No code changes. Just physics.
Optimization 1: The Disk I/O Trap
CI/CD is I/O bound. Extracting cache, restoring artifacts, and linking libraries requires massive random read/write operations. If your VPS provider is running standard SSDs (or heaven forbid, spinning rust) over a network-attached storage protocol, your CPU spends half its time in iowait.
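A quick way to confirm the diagnosis is to watch iowait while a pipeline is actually running. The snippet below assumes an Ubuntu 20.04 box with the sysstat package installed; if %iowait stays high while your build volume sits near 100% utilization, the CPU is waiting on the disk, not doing work.
# sysstat is not installed by default
sudo apt-get install -y sysstat
# Extended per-device stats, refreshed every 2 seconds, while a build runs
iostat -x 2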
You need local NVMe. It’s not a luxury; for DevOps, it is a requirement. Here is how you check whether your current host is lying to you about performance. Run this fio test (install it first with apt install fio on Ubuntu 20.04):
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=240 --group_reporting
On a standard cloud provider, you might see 3000-5000 IOPS. On a proper KVM setup with NVMe passthrough (like CoolVDS), you should be seeing upwards of 50,000 IOPS. That difference is your npm install finishing in 30 seconds versus 3 minutes.
Optimization 2: Tuning the GitLab Runner
Default configurations are for hobbyists. If you are running GitLab Runner (currently v13.1 is the stable go-to), you need to adjust your concurrency and cache settings. A common mistake is not utilizing the full core count of the VPS because the concurrent limit is too low.
Here is a production-ready config.toml tuned for a 4 vCPU CoolVDS instance:
concurrent = 4
check_interval = 1

[[runners]]
  name = "norway-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.11"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Pro Tip: Notice the check_interval = 1. By default the runner only polls GitLab for new jobs every three seconds, and setting it to 0 simply falls back to that default rather than polling "immediately". Dropping it to one second reduces the "queue time" perceived by developers, making the system feel snappier. Combine this with the low latency of a Norwegian datacenter, and the pickup time becomes practically instantaneous.
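If you would rather let the runner generate that file than hand-edit it, something along these lines should produce an equivalent [[runners]] entry (token redacted, flags adjusted to your own setup). Note that concurrent is a global setting, so you still bump that in config.toml afterwards.
# Register a Docker-executor runner non-interactively
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REDACTED" \
  --name "norway-nvme-runner-01" \
  --executor "docker" \
  --docker-image "docker:19.03.11" \
  --docker-privileged \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache"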
Optimization 3: Docker Layer Caching with BuildKit
If you aren't using Docker BuildKit yet, you are living in 2018. BuildKit constructs a dependency graph of your build instructions and executes independent steps in parallel. It effectively skips unused stages.
Enabling it is as simple as setting an environment variable, but flipping the switch is not enough: you also need to structure your Dockerfile to maximize cache hits. Here is an optimized pattern for a Node.js application:
# syntax=docker/dockerfile:1.0
FROM node:14-alpine AS base
WORKDIR /app
# Separate dependency definition from code to leverage caching
COPY package.json package-lock.json ./
# CI install is faster and strictly respects the lockfile
RUN npm ci --only=production
FROM base AS release
COPY . .
CMD ["node", "server.js"]
Run this with:
DOCKER_BUILDKIT=1 docker build . -t my-app:latest
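One caveat for CI: if your runner is ephemeral, the local layer cache may be empty when the job starts. A common workaround, assuming you already push images to a registry the runner can reach quickly (registry.example.com below is a placeholder), is to embed inline cache metadata in the pushed image and reuse it as a cache source on the next build. BuildKit supports this from Docker 19.03 onwards:
export DOCKER_BUILDKIT=1
# Pull the previously pushed image so its layers can act as a cache source
docker pull registry.example.com/my-app:latest || true
# Bake cache metadata into the image and reuse unchanged layers
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from registry.example.com/my-app:latest \
  -t registry.example.com/my-app:latest .
docker push registry.example.com/my-app:latest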
The Latency & Sovereignty Factor
We need to talk about where your data lives. Norway has some of the strictest data privacy interpretations in Europe. With the Data Inspectorate (Datatilsynet) keeping a close watch on GDPR compliance, relying on US-owned infrastructure is becoming a calculated risk.
But beyond compliance, it is about raw speed. If your dev team is in Oslo or Bergen, the round-trip time (RTT) to a server in Oslo is ~2-5ms. To Frankfurt, it's ~25ms. To US East, it's ~90ms.
| Metric | US Cloud Hosting | CoolVDS (Oslo) |
|---|---|---|
| Ping (from Oslo) | 95ms | 3ms |
| Docker Push (1GB) | ~45 seconds | ~8 seconds |
| Data Sovereignty | Questionable (Cloud Act) | Protected (Norwegian Law) |
When your CI/CD pipeline involves hundreds of Git operations and API calls per build, those milliseconds add up to minutes of waiting.
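Don't take my numbers on faith; measure from your own office. Replace the placeholder hostname below with your actual GitLab instance and a real repository:
# Round-trip time to the Git server
ping -c 10 gitlab.example.com
# Rough end-to-end cost of a fetch-heavy operation
time git clone --depth 1 https://gitlab.example.com/group/project.git /tmp/clone-test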
Why KVM Beats Containers for CI Runners
You might ask, "Why not just run my pipeline in a Kubernetes cluster?" You can, but running Docker-inside-Docker (DinD) within a shared containerized environment often leads to security headaches and performance penalties due to overlay filesystems.
At CoolVDS, we use KVM virtualization. This means your runner gets a dedicated kernel. There is no "noisy neighbor" stealing your CPU cycles when they decide to mine cryptocurrency on their container. You get the raw metal performance required to compile code fast.
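If you want to verify what your current provider is actually giving you, a one-liner is enough: on a proper KVM guest systemd-detect-virt reports kvm, while inside a shared container runtime you will typically see docker or lxc instead.
systemd-detect-virt
# Expected on a dedicated KVM instance: kvm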
Final Thoughts
Optimizing your pipeline isn't just about writing better YAML. It's about respecting the hardware. You need high IOPS, low latency, and predictable CPU performance. If your current provider is giving you "burstable" performance, they are essentially telling you that your build times will be inconsistent.
Don't let slow I/O kill your developer velocity. Deploy a dedicated runner on a CoolVDS NVMe instance today and watch your pipeline go green before you even finish your coffee.