Slash Your Build Times: Optimizing CI/CD Pipelines on Nordic Infrastructure

There is nothing—absolutely nothing—more soul-crushing for a development team than staring at a spinning icon for 45 minutes just to see a red "FAILED" badge because of a timeout. I’ve seen entire sprints derailed because the integration suite took longer to run than the code took to write. In 2019, if your pipeline takes more than 10 minutes, you aren't just wasting compute resources; you are burning developer morale.

The bottleneck usually isn't code complexity. It's infrastructure. Most developers slap a Jenkins container on a cheap, oversold cloud instance and wonder why npm install crawls. Today, we are going to dissect the anatomy of a slow pipeline and fix it using raw compute power, intelligent caching, and the geographical advantage of hosting right here in Norway.

1. The I/O Bottleneck: Why Spinning Rust is Dead

Let’s get technical. CI/CD processes are violently I/O heavy. When you run a build, you are essentially asking the disk to write, read, and delete tens of thousands of tiny files in rapid succession. Think about node_modules or compiling C++ object files.

If your VPS provider is running standard SSDs (or worse, HDDs) over a networked storage layer (SAN), your iowait is going to skyrocket. I recently debugged a Jenkins agent on a generic cloud provider where the CPU usage was 10%, but the load average was 15. Why? Because the CPU was just waiting for the disk to wake up.
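You can confirm this pattern in about ten seconds. A quick sketch using the standard sysstat tools (device names will differ on your box, and you may need to install sysstat first):

# Extended per-device stats every 2 seconds; watch %util and the await columns
iostat -x 2

# The 'wa' column here shows CPU time spent waiting on I/O
vmstat 2

If %util on your disk is pinned near 100 while the CPU sits idle, the storage layer is your bottleneck, not your test suite.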

The Fix: You need NVMe. Not just SSD, but Non-Volatile Memory Express attached directly to the PCIe bus. At CoolVDS, we standardized on NVMe for our KVM instances because the difference in IOPS is measured in orders of magnitude, not percentage points. When your storage queue depth clears instantly, your build time drops.
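If you want numbers instead of anecdotes, fio gives you a repeatable benchmark. Here is a minimal random-write test, roughly what a dependency install looks like to the disk (file name and size are arbitrary, pick your own):

# 4K random writes at queue depth 32, bypassing the page cache
fio --name=ci-disk-test --filename=/tmp/fio-test --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based --group_reporting

# Clean up the test file afterwards
rm /tmp/fio-test

Run it once on a SAN-backed instance and once on local NVMe, then compare the IOPS line. The gap is usually embarrassing for the SAN.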

2. Docker Layer Caching Strategy

If you are rebuilding your entire container image from scratch on every commit, you are doing it wrong. Docker images are stacks of read-only layers, and any layer whose inputs have not changed can be served straight from the build cache. In 2019, exploiting that cache is the single most effective way to speed up deployments.

Here is a classic "Bad" Dockerfile I see in production environments constantly:

FROM node:10-alpine
WORKDIR /app
COPY . .
# This invalidates the cache every time code changes
RUN npm install
CMD ["npm", "start"]

Every time you change a single line of code in index.js, Docker invalidates the COPY . . layer, and with it every layer that follows, which forces npm install to run again. That is minutes of wasted time on every push.

Here is the optimized version we run on our internal CoolVDS build agents:

FROM node:10-alpine
WORKDIR /app

# Copy only the dependency definitions first
COPY package.json package-lock.json ./

# This layer is cached unless dependencies change
RUN npm install --production

# Now copy the code
COPY . .

CMD ["npm", "start"]

By separating the dependency copy from the source code copy, 90% of your commits will skip the installation phase entirely.
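One caveat: on ephemeral CI runners the local layer cache starts out empty, so there is nothing to hit. The usual workaround is to seed the cache from your registry. A sketch, with the registry URL and image name as placeholders for your own:

# Seed the local cache from the last successful build (tolerate a miss on the first run)
docker pull registry.example.com/myapp:latest || true

# Tell Docker it may reuse layers from that pulled image
docker build --cache-from registry.example.com/myapp:latest \
  -t registry.example.com/myapp:latest .

# Push so the next runner can seed from this build
docker push registry.example.com/myapp:latest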

3. Kernel Tuning for High-Concurrency Builds

Default Linux distributions like Ubuntu 18.04 LTS are tuned for general-purpose usage, not for the abusive relationship a CI runner has with the kernel. When running parallel tests, you will likely hit file descriptor limits or ephemeral port exhaustion.

On your dedicated runner (ideally a CoolVDS instance with root access), you need to tune /etc/sysctl.conf. I apply this config before I even install Docker:

# /etc/sysctl.conf optimization for CI Runners

# Increase the limit of open file descriptors
fs.file-max = 2097152

# Allow more connections to complete
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 4096

# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Increase port range for massive parallel testing
net.ipv4.ip_local_port_range = 1024 65535

Run sysctl -p to apply. Without this, your high-concurrency integration tests will start failing with cryptic network errors that have nothing to do with your code.
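One thing sysctl.conf does not cover: fs.file-max raises the kernel-wide ceiling, but each process is still capped by its own nofile limit. A sketch of the matching per-process settings, with values mirroring the kernel tuning above (adjust to taste):

# /etc/security/limits.conf -- applies to login sessions
*    soft    nofile    1048576
*    hard    nofile    1048576

# Services started by systemd ignore limits.conf; for the Docker daemon,
# drop an override at /etc/systemd/system/docker.service.d/limits.conf:
[Service]
LimitNOFILE=1048576

Run systemctl daemon-reload && systemctl restart docker after adding the override.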

Pro Tip: If you are using GitLab CI, stop using Shared Runners for production builds. You are sharing IOPS with thousands of other users. Deploy a specific GitLab Runner on a dedicated VPS. The isolation ensures consistent build times, which allows you to spot performance regressions in your app, rather than guessing if the build server is just busy.
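Once that dedicated runner is registered (we cover registration below), cap how many jobs it runs at once so parallel pipelines don't fight over the same cores. A sketch of /etc/gitlab-runner/config.toml; the values are starting points, not gospel:

# /etc/gitlab-runner/config.toml (excerpt)
concurrent = 4    # total simultaneous jobs on this host

[[runners]]
  name = "CoolVDS-NVMe-Runner"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # Persist a cache directory across jobs on the local NVMe
    volumes = ["/cache"]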

4. The Geography of Latency: Why Norway Matters

In the DevOps world, we often ignore physics. Light has a speed limit. If your git repository is hosted in GitHub's US-East region, your container registry is in Frankfurt, and your deployment target is in Oslo, your packets are traveling the world.

For Nordic companies, data sovereignty and speed go hand in hand. With the recent GDPR enforcement and the Datatilsynet keeping a close watch, keeping data within Norwegian borders is a compliance necessity. But it's also a performance hack.

Metric                 US Cloud Provider               CoolVDS (Oslo)
------                 -----------------               --------------
Ping to NIX (Oslo)     ~90-120 ms                      ~1-3 ms
Storage backend        Shared network storage (SAN)    Local NVMe
Data sovereignty       CLOUD Act (US jurisdiction)     Norwegian jurisdiction

By hosting your GitLab instance or your Jenkins master on a CoolVDS server in Oslo, you reduce the latency between your code storage, your build artifacts, and your production environment to near zero. We aren't just talking about ping; we are talking about throughput stability.
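Measure it yourself before taking anyone's word for it. The hostnames below are placeholders for your own git server and registry:

# Round-trip time from the runner to the services your pipeline talks to
ping -c 5 gitlab.example.com

# mtr shows every hop, so you can see exactly where packets leave the Nordics
mtr --report --report-cycles 10 registry.example.com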

5. Automating the Runner Deployment

Don't configure servers by hand. It’s 2019. Use Ansible. Here is a snippet of a playbook I use to provision a new build agent on a fresh Debian 9 or Ubuntu 18.04 instance. This installs Docker and the GitLab runner, prepping it for registration.

---
- hosts: build_runners
  become: true
  tasks:
    - name: Install Docker dependencies
      apt:
        name: ["apt-transport-https", "ca-certificates", "curl", "software-properties-common"]
        state: present

    - name: Add Docker GPG key
      apt_key:
        url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"
        state: present

    - name: Add Docker apt repository
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} stable"
        state: present

    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present
        update_cache: yes

    - name: Add GitLab Runner repo
      shell: "curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | bash"
      args:
        creates: /etc/apt/sources.list.d/runner_gitlab-runner.list

    - name: Install GitLab Runner
      apt:
        name: gitlab-runner
        state: latest

    - name: Ensure Docker service is running
      service:
        name: docker
        state: started
        enabled: yes

Once the runner is up, the most critical step is the registration. When you register the runner, use the docker executor. Do not use the shell executor unless you enjoy cleaning up messy artifacts manually.

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image alpine:latest \
  --description "CoolVDS-NVMe-Runner" \
  --tag-list "high-cpu,nvme,norway"
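On the project side, route jobs to that runner through its tags. A minimal .gitlab-ci.yml sketch; the stage name and scripts are illustrative:

# .gitlab-ci.yml -- pin the job to the dedicated NVMe runner
stages:
  - test

test:
  stage: test
  image: node:10-alpine
  tags:
    - nvme
    - norway
  # Cache node_modules between pipelines, keyed per branch
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
  script:
    - npm ci
    - npm test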

Conclusion: Stop Renting Noise

Your CI/CD pipeline is the heartbeat of your engineering organization. When it skips a beat, everyone stops. You can spend weeks refactoring your Makefiles and optimizing webpack configurations, but if the underlying metal is slow, you are fighting a losing battle.

We built CoolVDS because we were tired of noisy neighbors and unpredictable I/O on the major hyperscalers. We use KVM virtualization to guarantee that the RAM and CPU you pay for are actually yours. For a build server, that consistency is the difference between a 3-minute deploy and a timeout.

Ready to fix your pipeline? Don't let slow I/O kill your release cadence. Deploy a high-performance NVMe instance on CoolVDS in Oslo today and watch your build times plummet.