CI/CD Bottlenecks Are Killing Your Velocity: Optimizing Pipelines on Norwegian Infrastructure

There is a specific kind of silence that falls over a development office around 2:00 PM. It’s not focus. It’s waiting. It’s the sound of expensive engineers staring at a spinning icon on a Jenkins dashboard or a GitLab pipeline trace, waiting for docker build to finish.

I have audited infrastructure for top-tier Oslo tech firms where the burn rate wasn't in the cloud bill—it was in the idle time of developers waiting 20 minutes for a deployment that should take three. In 2019, if your pipeline takes longer than the time it takes to fetch a coffee, you are doing it wrong.

The culprit is rarely the complexity of your code. It is almost always the infrastructure underneath your runners. Let's look at why your pipelines are stalling and how to architect a solution that keeps your dev team moving.

The Hidden Killer: I/O Wait and Steal Time

Most development teams treat CI runners as second-class citizens, dumping them on the cheapest available VPS instances. This is a fatal error. CI/CD processes are inherently I/O heavy. Extracting artifacts, unzipping node_modules, and layering Docker images punish the disk subsystem.

On a standard shared hosting environment using OpenVZ or older virtualization, you are fighting for disk time with every other noisy neighbor on the host node. If your provider is using spinning rust (HDD) or even cheap SATA SSDs, your I/O wait times will skyrocket.

Run this on your current CI runner during a build:

iostat -x 1 10

If %iowait consistently sits above 5-10%, your CPU is sitting idle while the disk tries to catch up. I've seen builds where npm install took 400 seconds on a cheap VPS and 45 seconds on a proper NVMe-backed instance.
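If you want to script that check instead of eyeballing the terminal, both %iowait and %steal can be pulled straight out of iostat's CPU summary. The snippet below runs against a hardcoded sample of the avg-cpu block (an assumption for illustration); in practice you would pipe in the live command output:

```shell
# Sample avg-cpu block as printed by `iostat -x` (hardcoded here for illustration;
# in practice, pipe in the live command output instead).
sample='avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12.50    0.00    4.20   18.30    2.10   62.90'

# %iowait is the 4th field and %steal the 5th on the line after the header.
vals=$(printf '%s\n' "$sample" | awk '/avg-cpu/ {getline; print $4, $5}')
iowait=${vals%% *}
steal=${vals##* }
echo "iowait=${iowait}% steal=${steal}%"   # prints: iowait=18.30% steal=2.10%
```

A steal figure persistently above a couple of percent is the smoking gun for a noisy-neighbor host: the hypervisor is handing your CPU time to someone else.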

The CoolVDS Standard

This is why at CoolVDS, we standardized on NVMe storage for all our KVM instances. We don't upsell it as a premium feature; it's the baseline. When we route traffic through NIX (Norwegian Internet Exchange) in Oslo, we ensure that the bottleneck is never the hardware.

Optimizing the Docker Layer

If you are running Docker-in-Docker (dind) or binding the socket, the storage driver matters. In 2019, if you aren't using overlay2, you are living in the past. Older drivers like devicemapper are performance graveyards.

Ensure your /etc/docker/daemon.json is explicitly configured for performance:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker and verify with docker info | grep Storage. This seemingly small change can reduce image build times by 30% by handling layer merging more efficiently.

Caching: The Difference Between 2 Minutes and 20

Downloading dependencies over the internet for every single build is insanity. Even with the solid connectivity we have here in Northern Europe, latency adds up. You must implement aggressive caching.

For a GitLab CI setup, do not just cache the path; use a lock file as the key. Here is a battle-tested configuration snippet for a Node.js project that prevents cache corruption and ensures speed:

cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/

install_dependencies:
  stage: build
  script:
    - npm ci --cache .npm --prefer-offline
  artifacts:
    paths:
      - node_modules/

Pro Tip: Note the use of npm ci instead of npm install. It installs strictly from the lockfile and is significantly faster in automated environments. Combined with the local .npm cache, this turns a network-bound task into a purely local disk operation. This is where CoolVDS's high IOPS performance shines.
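Why key on the lock file? GitLab derives the cache key from a hash of the file's contents, so the cache is invalidated exactly when your dependencies change and never otherwise. The same idea by hand, with a stand-in lock file and key prefix that are purely illustrative:

```shell
# Stand-in lock file for this demo; in a real project it is your committed package-lock.json.
printf '{"lockfileVersion": 1}\n' > /tmp/package-lock.json

# Key = stable prefix + content hash: it changes if and only if the lock file changes.
key="npm-cache-$(sha256sum /tmp/package-lock.json | cut -c1-12)"
echo "cache key: $key"

# Touching the file (new mtime, same content) leaves the key identical -> cache hit.
touch /tmp/package-lock.json
key2="npm-cache-$(sha256sum /tmp/package-lock.json | cut -c1-12)"
[ "$key" = "$key2" ] && echo "key unchanged: cache hit"
```

Content-addressed keys are why a cosmetic commit that doesn't touch the lockfile still restores a warm cache.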

Latency and Data Sovereignty

Latency isn't just about disk speed; it's about network topology. If your team is in Oslo or Bergen, but your CI runners are in a data center in Frankfurt or Virginia, you are adding 30-100ms of latency to every git fetch and registry push.
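A back-of-the-envelope calculation makes the cost concrete. The round-trip time, round-trip count, and build volume below are assumptions for illustration, not measurements:

```shell
# Assumed numbers, for illustration only -- substitute your own measurements.
rtt_ms=80            # round trip, e.g. Oslo team to a Virginia runner
round_trips=40       # a git fetch plus a registry push negotiate many round trips
builds_per_day=120   # a moderately busy team

overhead_s=$(( rtt_ms * round_trips * builds_per_day / 1000 ))
echo "~${overhead_s}s of pure network wait per day"   # prints ~384s for these numbers
```

Under these assumptions that is over half an hour of accumulated wait per working week, before disk I/O even enters the picture.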

Hosting your CI infrastructure locally in Norway isn't just about speed; it's about compliance. With GDPR in full effect since last year and Datatilsynet (The Norwegian Data Protection Authority) watching closely, keeping your source code and potentially sensitive test databases within Norwegian borders mitigates legal risk.

Using a provider like CoolVDS ensures your data stays under Norwegian jurisdiction while benefiting from the low latency of the local grid.

Tuning the Kernel for High Load

CI runners often spawn thousands of short-lived processes and network connections. The default Linux kernel settings are often too conservative for this kind of churn.
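A reasonable starting point is an /etc/sysctl.d drop-in like the sketch below. The values are illustrative defaults for a busy runner, not universal truths; benchmark before and after against your own workload.

```
# /etc/sysctl.d/99-ci-runner.conf -- illustrative starting values, not gospel

# Allow more pending connections on listening sockets
net.core.somaxconn = 4096

# Widen the ephemeral port range for floods of short-lived outbound connections
net.ipv4.ip_local_port_range = 10240 65535

# Reuse sockets stuck in TIME_WAIT for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Raise the system-wide file handle ceiling for jobs unpacking thousands of small files
fs.file-max = 2097152

# More inotify watches for file-watching build tools
fs.inotify.max_user_watches = 524288
```

Load it with sysctl --system and keep an eye on connection and file-handle metrics as your job concurrency grows.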