Stop Watching Paint Dry: Optimizing CI/CD Pipelines on Norwegian Infrastructure

There is nothing more soul-crushing in this industry than pushing a commit, switching to your terminal, and staring at a progress bar for 25 minutes. We used to call these "coffee break builds." Now? They are velocity killers. If your feedback loop exceeds five minutes, your developers start context-switching, and that is where bugs happen.

I recently audited a setup for a fintech client in Oslo. Their compliance requirements (Schrems II) meant data couldn't leave the EEA, so they were running Jenkins on a local budget VPS provider. Their Java builds were taking 18 minutes. The CPU wasn't the bottleneck; the disk was. They were hitting IOPS limits on shared storage, causing the runner to hang while unpacking Maven dependencies.

We migrated the runner to a CoolVDS NVMe KVM instance. The build time dropped to 4 minutes. Same CPU count. The difference was pure I/O throughput and lack of "noisy neighbors." Here is how you replicate that performance.

1. The Hidden Bottleneck: Disk I/O Wait

Most CI/CD jobs are I/O bound, not CPU bound. `npm install`, `docker pull`, and artifact creation all hammer the disk. If you are running on a standard VPS where storage is throttled or oversold, your CPU is just waiting for data.

Check your runner's I/O wait during a build using iostat (part of the sysstat package on Ubuntu 22.04):

# Install sysstat
sudo apt-get install sysstat

# Watch I/O every 2 seconds
iostat -xz 2

If your %iowait consistently spikes above 10-15%, your storage is the problem. On CoolVDS, we utilize local NVMe arrays passed through via KVM, virtually eliminating this wait time. You get raw throughput, essential for heavy Docker caching operations.
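
If you want to verify what your storage can actually deliver, run a quick synthetic benchmark before blaming your build tool. Here is a fio sketch that approximates the small random reads of dependency unpacking; the size, job count, and runtime values are just reasonable starting points, so adjust them for your disk:

# Install fio
sudo apt-get install fio

# 4K random reads, roughly the pattern of unpacking thousands of small files
fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --size=1G --numjobs=4 --runtime=30 --time_based --group_reporting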

2. Docker Layer Caching Strategy

In 2023, if you aren't using BuildKit, you are living in the past. The legacy builder executes every instruction sequentially. BuildKit constructs a dependency graph of your build and runs independent stages in parallel where possible.

Ensure your pipeline enables BuildKit explicitly:

export DOCKER_BUILDKIT=1
docker build -t my-app:latest .

Furthermore, order matters in your Dockerfile. Don't copy your source code before installing dependencies. If you change one line of code, Docker invalidates the cache for every subsequent step.

Bad Practice:

FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

Optimized Practice:

FROM node:18-alpine
WORKDIR /app
# Copy package.json first to leverage cache
COPY package.json package-lock.json ./
RUN npm ci
# Now copy source code
COPY . .
CMD ["node", "index.js"]
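
BuildKit can take this one step further with cache mounts, which persist the package manager's cache across builds even when the layer itself is invalidated. A minimal sketch, assuming npm's default cache directory of /root/.npm:

# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# The cache mount survives rebuilds, so npm rarely hits the network
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
CMD ["node", "index.js"]

Note the syntax directive on the first line; cache mounts require the BuildKit frontend.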

3. Network Latency and Local Mirrors

If your servers are in Oslo but your pipeline is pulling images from a registry in Virginia (us-east-1), you are fighting physics. For Norwegian deployments, latency matters.

Configure your runners to use local mirrors. If you are using Debian or Ubuntu, point your /etc/apt/sources.list to a Norwegian mirror to saturate your 1Gbps uplink.
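
On Ubuntu 22.04 (jammy), that typically means the country mirror. Verify the mirror is healthy and actually close to your datacenter before committing to it:

# /etc/apt/sources.list pointed at the Norwegian Ubuntu mirror
deb http://no.archive.ubuntu.com/ubuntu/ jammy main restricted universe multiverse
deb http://no.archive.ubuntu.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://no.archive.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse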

Pro Tip: CoolVDS infrastructure peers directly at NIX (Norwegian Internet Exchange). This keeps traffic local within Norway, reducing latency to single-digit milliseconds for local data transfers and ensuring data residency compliance under GDPR.
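
The same logic applies to container images. Instead of pulling from Docker Hub across the Atlantic on every job, you can run a pull-through cache on (or near) your runners and point the daemon at it. A sketch using the official registry:2 image; the localhost address assumes the cache runs on the runner host itself:

# Start a pull-through cache of Docker Hub
docker run -d --restart=always -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  --name registry-mirror registry:2

# /etc/docker/daemon.json on each runner
{
  "registry-mirrors": ["http://localhost:5000"]
}

# Apply the change
sudo systemctl restart docker

The first pull of an image still crosses the ocean; every pull after that is served from local disk.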

4. Effective Caching in GitLab CI

For GitLab CI users, misconfigured caching is a common pain point. Don't cache `node_modules` across branches unless your dependency versions are strictly locked. Instead, cache the package manager's own cache directory and key it on the lockfile.

Here is a robust configuration for a heavy frontend build:

default:
  image: node:18

variables:
  # Point npm cache to a local directory inside the project
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm

build_job:
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

This config ensures that `npm ci` (which is faster and cleaner than `install`) hits the local cache. Combined with the high IOPS of our NVMe storage, dependency resolution becomes nearly instantaneous.

5. Infrastructure as Code (IaC) for Runners

Do not treat your build servers like pets. They should be cattle. Use Ansible to provision your build agents so you can scale them horizontally when the team crunches for a release.

Below is a snippet from a playbook we use to provision a Docker-ready runner on Ubuntu 22.04:

---
- name: Provision CI Runner
  hosts: runners
  become: yes
  tasks:
    - name: Install required packages
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release
        state: present
        update_cache: yes

    - name: Create keyrings directory
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'

    - name: Add Docker GPG key
      shell: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
      args:
        # Skip on re-runs so the playbook stays idempotent
        creates: /etc/apt/keyrings/docker.gpg

    - name: Set up repository
      shell: |
        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
      args:
        creates: /etc/apt/sources.list.d/docker.list

    - name: Install Docker Engine
      apt:
        name: docker-ce
        state: latest
        update_cache: yes

Why Infrastructure Matters

You can tweak configurations all day, but you cannot configure your way out of bad hardware. Shared hosting environments often suffer from "steal time"—cycles stolen by other tenants on the hypervisor. This makes build times inconsistent.
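
You can measure steal time directly. The st column at the far right of vmstat output (or the %st field in top's CPU summary line) is the percentage of time your vCPU was ready to run but the hypervisor scheduled another tenant instead:

# Sample CPU statistics every 2 seconds; the last column (st) is steal time
vmstat 2

Anything consistently above a few percent means your "dedicated" cores are not so dedicated.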

At CoolVDS, we prioritize isolation. When you deploy a VPS with us, you aren't fighting for resources. This predictability is critical not just for speed, but for the sanity of your Ops team. Whether you are running Jenkins, GitLab Runners, or Drone CI, the underlying metal dictates your ceiling.

Final Thoughts

Optimization is an iterative process. Start by measuring your I/O wait. If it's high, move to better storage. Then, optimize your Docker layers. Finally, bring the data closer to your computation.

Don't let slow infrastructure be the reason you miss a Friday deploy window. Spin up a high-performance NVMe instance on CoolVDS today and see what a proper build pipeline feels like.