
Stop Watching Progress Bars: CI/CD Optimization for High-Velocity Teams

The "It Works on My Machine" Excuse Ends Here: Optimizing CI/CD in 2018

It is May 2018. If your developers are spending more time playing foosball than deploying code because "the build is running," you have a problem. In the Nordic tech scene, we pride ourselves on efficiency. Yet, I constantly see teams in Oslo and Bergen running sophisticated microservices on CI/CD pipelines that rely on legacy spinning rust (HDD) or oversold cloud instances.

Latency isn't just network time; it's the time lost context-switching when a build takes 20 minutes instead of two. We are going to fix that. We will look at why disk I/O is the silent killer of CI performance, how to implement Docker multi-stage builds correctly, and why the new GDPR enforcement (as of last week) means you should probably host your build artifacts locally in Norway.

The I/O Bottleneck: Why CPU is Overrated

Most CTOs throw more CPU cores at a slow Jenkins or GitLab CI server. This is usually a waste of budget. Analyze your build process. Whether you are running npm install, compiling Go binaries, or building Docker images, the operation is heavily dependent on random read/write speeds.

The Reality Check: A 4-core server with NVMe storage will often outperform a 16-core server on SATA SSDs for typical CI workloads. Random access times and IOPS matter more than raw core count.

Pro Tip: Check your disk latency. If your iowait is consistently above 5% during builds, your storage solution is choking your pipeline. On CoolVDS, we enforce strict KVM isolation and use enterprise NVMe arrays to ensure your disk I/O throughput is dedicated, not shared with neighbors running crypto miners.
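
A quick way to measure this on the build host is iostat (a sketch, assuming the sysstat package is available on a Debian/Ubuntu box):

# Install sysstat for iostat (Debian/Ubuntu)
apt-get install -y sysstat

# Sample extended disk statistics every 5 seconds while a build runs;
# watch %iowait and the per-device await column (average wait in ms)
iostat -x 5

# Quick overview of CPU time spent waiting on I/O (the "wa" column)
vmstat 5 3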

Docker Multi-Stage Builds: The 2018 Standard

Since Docker 17.05, we have had multi-stage builds. If you are still shipping build tools (compilers, headers, SDKs) in your production containers, stop. It bloats the image and increases the attack surface.

Here is how a proper, optimized Dockerfile should look today for a Go application that manages its dependencies with dep:

# Stage 1: The Builder
FROM golang:1.10-alpine AS builder

# Install git and dep for dependency management (Go 1.10 predates Go modules)
RUN apk update && apk add --no-cache git && \
    go get -u github.com/golang/dep/cmd/dep

# Go 1.10 builds from GOPATH, so the source must live under /go/src
WORKDIR /go/src/app

# Cache dependencies in their own layer so code changes do not invalidate it
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure -vendor-only

COPY . .

# Build a static binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Stage 2: The Production Image
FROM alpine:3.7  

RUN apk --no-cache add ca-certificates

WORKDIR /root/

# Copy only the binary from the builder stage
COPY --from=builder /go/src/app/main .

CMD ["./main"]

This approach reduces image size from ~700MB to ~15MB. Smaller images mean faster push/pull times to your registry, which directly reduces total pipeline duration.
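
Building and verifying the result takes two commands; the image name here is just an example:

# Build the multi-stage image (only the final stage ends up in the tag)
docker build -t myapp:latest .

# Check the resulting image size
docker images myapp:latest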

Caching Strategies in GitLab CI

Redownloading dependencies for every commit is insanity. If you are using GitLab CI (a favorite here in Europe), you must utilize global caching effectively. However, cache extraction can be slow if your compression settings are off or if the I/O is weak.

Here is a robust configuration for a Node.js project targeting a local runner:

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

build_job:
  stage: build
  image: node:8.11
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

# Only run heavy tests if build succeeds
test_job:
  stage: test
  image: node:8.11
  dependencies:
    - build_job
  script:
    - npm run test

Note the use of npm ci (introduced in npm 5.7). It is significantly faster and more reliable than npm install for CI environments because it skips package version resolution and installs directly from the lockfile. One caveat: the node:8.11 image still bundles npm 5.6, so upgrade npm inside the job (npm install -g npm@5.7) or bake a newer npm into a custom runner image before relying on npm ci.

The GDPR Angle: Data Sovereignty

With GDPR enforcement in force since May 25th, where your code lives matters. CI/CD pipelines often contain production database dumps for integration testing, environment variables with API keys, or customer data snippets for debugging.

If you are using a US-based SaaS CI provider, you are transferring this data outside the EEA. Legal teams are scrambling right now to justify these flows. The pragmatic solution? Bring the runner home.

By hosting your GitLab Runner or Jenkins agent on a VPS in Norway (like CoolVDS), you ensure that:

  • Data stays within Norwegian jurisdiction (Datatilsynet is happy).
  • Latency to infrastructure connected to NIX (the Norwegian Internet Exchange) is minimal.
  • You have full control over the security perimeter.
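
Registering a self-hosted runner against your GitLab instance takes a minute. A minimal sketch, assuming gitlab-runner is already installed on the VPS; the URL and token below are placeholders:

# Register a Docker-executor runner (values are placeholders)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:3.7" \
  --description "coolvds-runner-oslo" \
  --tag-list "nvme,norway"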

System Tuning for Build Servers

A default Linux kernel is tuned for general usage, not the high-concurrency connections and file operations of a busy CI server. If you run your own Jenkins or GitLab runner on a VPS, apply these sysctl tweaks to handle the load.

Add these to /etc/sysctl.conf:

# Increase the number of file watchers (critical for Webpack/Watch mode)
fs.inotify.max_user_watches = 524288

# Allow more open files
fs.file-max = 2097152

# Widen the port range for high connection churn (Docker/network tests)
net.ipv4.ip_local_port_range = 1024 65535

# Allow reuse of sockets in TIME_WAIT state for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Increase backlog for incoming connections
net.core.somaxconn = 65535

After editing, apply the changes with sysctl -p. Together with a raised per-process nofile limit (see below), these settings prevent the dreaded "Too many open files" errors during massive parallel test executions.
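
One thing sysctl does not cover is the open-file limit of the runner process itself. A sketch, assuming the runner is managed by systemd under the unit name gitlab-runner:

# Verify the kernel settings took effect
sysctl fs.file-max net.core.somaxconn

# Raise the per-process open-file limit for the runner service
mkdir -p /etc/systemd/system/gitlab-runner.service.d
cat > /etc/systemd/system/gitlab-runner.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65535
EOF
systemctl daemon-reload
systemctl restart gitlab-runner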

Comparison: Shared vs. Dedicated CI Resources

Feature           | SaaS / Shared Runner              | Self-Hosted on CoolVDS
Cost Model        | Per minute / Per user             | Fixed monthly (Predictable)
Disk Speed        | Variable (Noisy Neighbors)        | Consistent NVMe
Data Sovereignty  | Often US-based                    | Strictly Norway/Europe
Customization     | Limited (Docker-in-Docker issues) | Full Root Access

Infrastructure as Code (IaC) for Runners

Do not configure your build servers manually. It defeats the purpose of DevOps. Use Ansible to provision your CoolVDS instances. Here is a snippet to automate the setup of a Docker-ready runner:

---
- hosts: runners
  become: yes
  tasks:
    - name: Install required system packages
      apt:
        name: ['apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common']
        state: present
        update_cache: yes

    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present

    - name: Ensure Docker service is running
      service:
        name: docker
        state: started
        enabled: yes
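
Running the playbook against your runner hosts is then a single command; the inventory and playbook file names below are just examples:

# inventory.ini contains a [runners] group pointing at your CoolVDS instances
ansible-playbook -i inventory.ini provision-runner.yml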

Final Thoughts

Your CI pipeline is the heartbeat of your engineering team. When it lags, innovation lags. In 2018, there is no excuse for slow builds caused by hardware limitations. You need raw NVMe speed, KVM isolation to prevent CPU steal, and the legal safety of Norwegian data residency.

You can keep fighting with sluggish shared runners, or you can take control of your infrastructure.

Ready to cut your build times in half? Deploy a high-performance NVMe instance on CoolVDS in under 55 seconds and let your developers code, not wait.