Accelerating CI/CD Pipelines: From 20 Minutes to 2 Minutes in the Norwegian Cloud

There is nothing—absolutely nothing—that kills developer morale faster than a 25-minute build queue. You push a hotfix, grab a coffee, come back, and the runner is still pulling Docker images. It’s not just annoying; it’s a direct hit to your TTM (Time to Market). I once consulted for a fintech startup in Oslo where the CTO was ready to fire the entire infrastructure team because their npm install took longer than their daily stand-up.

The culprit? It wasn't the code. It was the infrastructure.

Most VPS providers oversell their CPU cycles. When you are compiling Rust or building heavy Java artifacts, you need raw, unadulterated thread power. If your neighbor on the physical host is mining crypto or running a heavy Magento re-index, your build hangs. Here is how we fix it, focusing on the distinct advantages of hosting in Norway.

1. The I/O Bottleneck: Why HDD and SATA SSDs Are Dead to Me

CI/CD is an I/O punisher. Every time a pipeline triggers, you are untarring layers, creating ephemeral files, compiling binaries, and writing artifacts. If your VPS runs on standard SATA SSDs (or heaven forbid, network-attached block storage with low IOPS limits), your CPU is going to spend 40% of its time in iowait.

I distinctly remember debugging a Jenkins agent that was crawling. We ran iostat and saw this:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12.40    0.00    3.10   45.50    2.00   37.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00    45.00   12.00   88.00   450.00  9200.00   193.00     2.50   25.00   10.00   35.00   8.50  85.00

That 45.50% iowait means the CPU is sitting idle, begging the disk to write data. We migrated that workload to a CoolVDS instance backed by local NVMe arrays. The build time dropped from 14 minutes to 3 minutes instantly. No code changes.
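
If you want to confirm the disk really is the bottleneck before migrating, a quick synthetic test with fio (assuming fio is installed on the runner) is more honest than any spec sheet:

# Random 4k read/write, roughly what untarring image layers and writing
# artifacts does to a filesystem. Direct I/O stops the page cache from
# flattering the numbers.
fio --name=ci-disk-check --rw=randrw --bs=4k --size=400M \
    --ioengine=libaio --direct=1 --numjobs=4 --iodepth=32 \
    --runtime=60 --time_based --group_reporting

On a crowded SATA-backed VPS you will typically see a few thousand IOPS here; a local NVMe array should land an order of magnitude higher.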

The Fix: Optimize the Docker Storage Driver

If you are running Docker-in-Docker (dind) on your runners, ensure you use the overlay2 driver. Older installations on CentOS 7 might default to devicemapper, which is a performance graveyard.

Check your driver:

docker info | grep 'Storage Driver'

If it doesn't say overlay2, update your /etc/docker/daemon.json immediately:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
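
Two notes on applying that change: the daemon needs a restart, and switching drivers makes any images or containers stored under the old driver invisible to Docker (the data stays on disk in the old directory). On an ephemeral CI box that is usually fine. Roughly:

sudo systemctl restart docker
docker info | grep 'Storage Driver'    # should now report overlay2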

2. Network Latency: The NIX Advantage

In 2022, your pipeline is likely pulling gigabytes of dependencies. Maven packages, Docker base images, pip wheels. If your server is in Frankfurt and your data residency requirements (thanks, GDPR) force you to route traffic through strict firewalls back to Oslo, you are adding latency to thousands of small HTTP requests.

By hosting your runners and your private registry in Norway, you leverage the NIX (Norwegian Internet Exchange). Latency within the country is often sub-2ms. This matters when you are doing thousands of metadata lookups during a build.
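
A quick way to see what those lookups cost is to time the connection and first byte against your registry. The URL below is a placeholder; point it at whatever mirror you actually use:

curl -o /dev/null -s \
  -w 'DNS: %{time_namelookup}s  connect: %{time_connect}s  TTFB: %{time_starttransfer}s\n' \
  https://your-registry.example.no/v2/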

Pro Tip: Don't rely on the public Docker Hub for base images; anonymous pulls are rate-limited. Set up a local pull-through cache using the official Registry image on a separate, small VPS. It keeps your traffic local and fast.

Here is a snippet to deploy a simple registry mirror on your CoolVDS instance:

version: '3'
services:
  registry-mirror:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
    volumes:
      - ./data:/var/lib/registry
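
The mirror only helps if your runners actually use it. Point each runner's Docker daemon at it in /etc/docker/daemon.json (the address below is an example; use whatever host and port your mirror listens on):

{
  "registry-mirrors": ["http://10.0.10.5:5000"]
}

Restart dockerd afterwards and watch ./data on the mirror fill up on the first pull. If you keep the mirror on plain HTTP inside a private network, some Docker versions will also want it listed under insecure-registries; putting it behind TLS avoids the question entirely.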

3. The "Steal Time" Killer: Choosing KVM over Containers

Many budget hosting providers use OpenVZ or LXC. These are containers, not virtual machines. They share the host kernel. If another customer on the node gets hit by a DDoS, your iptables chain locks up. If they fork-bomb, your PID limit is exhausted.

We strictly use KVM (Kernel-based Virtual Machine) at CoolVDS. It provides true hardware-level virtualization: your RAM is allocated to you, and your CPU cycles are reserved. In a CI/CD context, consistency is more important than raw burst speed. You need to know that a build takes 5 minutes, every single time.
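
Not sure what your current provider actually sold you? On a systemd-based distro, one command reports the virtualization technology you are sitting on:

systemd-detect-virt
# "kvm" or "qemu"  -> a real virtual machine with its own kernel
# "openvz" / "lxc" -> a shared-kernel container, shared fate with your neighbors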

To verify you aren't suffering from "noisy neighbors" stealing your CPU cycles, watch the st (steal) value in top's CPU summary line. Anything consistently above zero is unacceptable for a production CI environment.
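
vmstat gives you the same number non-interactively, which is handy inside a pipeline job or a cron'd health check; the st column on the far right is steal time:

vmstat 1 5
# ... us sy id wa st
# ... 12  3 83  2  0    <- st should stay at 0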

4. Pipeline Caching Strategies

Hardware solves a lot, but bad config ruins everything. A common mistake in GitLab CI is caching the wrong directory: with npm ci, caching node_modules is pointless because npm ci wipes that folder before every install. Cache npm's own cache directory instead (and .m2/repository for Maven builds), so you are not downloading the internet on every commit.

Here is a battle-tested .gitlab-ci.yml snippet for a Node.js project that leverages distributed caching:

stages:
  - build

build_app:
  stage: build
  image: node:16-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

Note the --cache .npm and --prefer-offline flags. The first keeps npm's cache inside the project directory so the runner can persist it between jobs; the second tells npm to use cached packages when they are present instead of revalidating them against the registry.
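
The same idea applies to JVM builds: relocate the local Maven repository into the project directory so the runner can cache it between jobs. A minimal sketch, with an illustrative job name and image tag:

build_jar:
  stage: build
  image: maven:3.8-openjdk-17
  variables:
    MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .m2/repository/
  script:
    - mvn -B package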

5. Data Sovereignty and Compliance

Since the Schrems II ruling in 2020, moving personal data to US-controlled clouds is a legal minefield. If your CI/CD pipeline processes test databases containing production-like data (even if obfuscated), that data is being processed on the runner.

If that runner is hosted on AWS or Azure, you have a theoretical transfer issue under GDPR. Hosting on a Norwegian provider like CoolVDS, which operates under Norwegian jurisdiction and Datatilsynet guidelines, simplifies your compliance posture significantly. You know exactly where the physical drive sits.

Summary: The Speed Trinity

To fix a slow pipeline, you need three things:

  • Disk Speed: NVMe reduces I/O wait during compilation.
  • Isolation: KVM prevents neighbor processes from stealing your CPU.
  • Proximity: Local connectivity in Norway reduces latency for dependency resolution.

Don't let slow I/O kill your release cadence or your developers' joy. Deploy a high-performance Jenkins or GitLab runner on a CoolVDS NVMe instance today. You can have it spun up in roughly 55 seconds—less time than it takes to complain about your current build time.