
Stop Watching Progress Bars: Optimizing CI/CD Pipelines on Norwegian Infrastructure (2018 Edition)


There is nothing more soul-crushing than pushing a hotfix on a Friday afternoon and watching a Jenkins progress bar crawl for 45 minutes. I’ve been there. Last month, I audited a setup for a client in Oslo where their deployment pipeline was taking an hour. The culprit wasn't their code; it was the underlying infrastructure gasping for air.

In 2018, "it works on my machine" is no longer a valid excuse, but "it's slow on the build server" is a valid complaint. If your Continuous Integration (CI) runners are hosted on oversold hardware with spinning rust (HDD), you are burning money on developer idle time.

This guide is for the sysadmins and DevOps engineers who are tired of complaining about build times. We’re going to look at optimization from the metal up to the Dockerfile.

The Hidden Bottleneck: Disk I/O

Most people blame CPU for slow builds. They are usually wrong. Compiling code is CPU intensive, yes, but think about what a CI job actually does:

  • git clone (Disk Write)
  • npm install / mvn install (Massive Disk I/O with thousands of small files)
  • docker build (Disk Read/Write for layers)
  • Artifact archiving (Disk Write)

If you are running this on a standard VPS with shared storage, your iowait is likely through the roof. I recently ran a benchmark on a budget VPS provider against a CoolVDS KVM instance. The difference in npm install time was 3x. Why? NVMe storage.

Check your current wait times. If you have a runner up right now, SSH in and run:

iostat -xz 1

If your %util is hitting 100% while your CPU is idling at 20%, upgrade your storage. You cannot tune your way out of bad physics.
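For a more direct measurement than iostat, a short fio run reports raw random-write IOPS — the access pattern that npm and Maven workloads resemble. This is a sketch assuming fio is installed (apt-get install fio); the file path and size are arbitrary:

```shell
# 4K random writes, direct I/O, 30 seconds, aggregated report
fio --name=ci-disk-test --filename=/tmp/fio-test --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --runtime=30 --time_based --group_reporting

# Remove the test file afterwards
rm -f /tmp/fio-test
```

On NVMe you should see tens of thousands of IOPS; oversold shared HDD storage often struggles to reach a few hundred.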

Docker: The Multi-Stage Revolution

Since Docker 17.05, multi-stage builds have been the single best way to keep pipelines fast and images small. Yet, I still see pipelines in production pushing 1.5GB images because they include the Go compiler or the JDK in the runtime image.

Network latency matters here. Pushing a 1GB image to a registry in Frankfurt from an office in Bergen takes time; pushing a 20MB Alpine-based image takes seconds. Multi-stage builds cut that network overhead dramatically.

Here is a proper multi-stage Dockerfile for a Go application, optimized for size:

# Build Stage
FROM golang:1.11-alpine AS builder

# Install git (required for fetching dependencies)
RUN apk update && apk add --no-cache git

WORKDIR /app

# Copy dependency definitions first to utilize layer caching
COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Build the binary. -w -s strips debug information to reduce size.
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -a -installsuffix cgo -o main .

# Final Stage
FROM alpine:3.8

RUN apk --no-cache add ca-certificates

WORKDIR /root/

# Copy only the binary from the builder stage
COPY --from=builder /app/main .

CMD ["./main"]

This approach reduced one client's deployment artifact from 800MB to 12MB. That is a massive reduction in bandwidth usage and deployment time.
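A side benefit of multi-stage builds: you can stop at any stage. One pattern (a sketch, assuming the Dockerfile above; image tags are illustrative) is to run tests inside the builder stage, which still has the full toolchain, while shipping only the slim final image:

```bash
# Build only up to the builder stage and run the test suite inside it
docker build --target builder -t myapp:builder .
docker run --rm myapp:builder go test ./...

# Build the full (final) image for deployment
docker build -t myapp:latest .
```

The --target flag arrived with multi-stage builds in Docker 17.05, so it is available anywhere multi-stage builds are.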

Caching Strategies: Don't Download the Internet

Every time your pipeline runs, are you downloading the same Maven artifacts or NPM packages? Stop it. You need a local caching strategy. For GitLab CI (which is gaining massive traction here in Europe), you can define cache paths.

However, the cache needs to be fast. If compressing and uploading the cache takes longer than downloading the dependencies, you've lost. This is where high IOPS (Input/Output Operations Per Second) becomes critical. On CoolVDS NVMe instances, the cache extraction is nearly instantaneous.

Here is a robust .gitlab-ci.yml configuration utilizing caching:

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

build_app:
  image: node:8.12
  stage: build
  script:
    - npm ci
    - npm run build
  artifacts:
    expire_in: 1 hour
    paths:
      - dist/

test_app:
  image: node:8.12
  stage: test
  script:
    - npm run test
  dependencies:
    - build_app

Pro Tip: Use npm ci instead of npm install in your CI pipelines. Introduced in npm 5.7, it skips dependency resolution and installs straight from the lockfile (after deleting node_modules), which is significantly faster and guarantees reproducible builds.
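One further refinement: by default every job both downloads and re-uploads the cache at the end. Since the test job never modifies node_modules, you can give it a pull-only cache policy (cache:policy has been available since GitLab 9.4). A sketch of the adjusted test job (image line omitted for brevity):

```yaml
test_app:
  stage: test
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull  # download the cache, skip the slow re-upload step
  script:
    - npm run test
```

On large node_modules trees, skipping the compress-and-upload step can shave a minute or more off every test run.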

Infrastructure As Code: The Runner

Don't rely on shared runners provided by SaaS platforms if you have strict performance or compliance requirements. With GDPR in full effect since May, knowing exactly where your code is processed is paramount. Running your own GitLab Runner or Jenkins agent on a VPS in Norway ensures data sovereignty.

Here is how to register a Docker runner on a CoolVDS instance using the official GitLab repository. Make sure the Docker daemon is using the overlay2 storage driver, which is the performance standard in 2018.

# 1. Install Docker (if not present)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# 2. Add the GitLab Runner repo
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash

# 3. Install the runner
sudo apt-get install gitlab-runner

# 4. Register the runner
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image alpine:latest \
  --description "norway-nvme-runner" \
  --tag-list "docker,nvme,norway" \
  --run-untagged="true" \
  --locked="false" \
  --access-level="not_protected"
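Before starting jobs, confirm the storage driver with docker info | grep Storage. If it is not overlay2, you can set it explicitly in /etc/docker/daemon.json (a minimal sketch — restart the Docker daemon afterwards, and note that switching drivers orphans existing images):

```json
{
  "storage-driver": "overlay2"
}
```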

Once registered, you need to optimize the runner configuration file at /etc/gitlab-runner/config.toml to handle concurrent jobs without choking the CPU.

concurrent = 4
check_interval = 0

Setting concurrent to match your vCPU count is usually a sweet spot.
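Beyond concurrency, the [runners.docker] section of config.toml can mount persistent volumes so package caches survive between jobs, and avoid re-pulling images that are already on disk. A sketch — the name and volume paths are illustrative assumptions:

```toml
[[runners]]
  name = "norway-nvme-runner"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # Persist a cache directory across builds on the NVMe disk
    volumes = ["/cache"]
    # Skip pulling images that already exist locally
    pull_policy = "if-not-present"
```

With if-not-present, base images are fetched once and reused, which pairs well with the registry mirror described below only when images are retagged properly — pin versions rather than relying on :latest.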

Network Latency and Mirrors

If your servers are in Oslo and you are pulling images from Docker Hub in the US, you are fighting physics. Latency adds up.

I recommend setting up a pull-through cache registry. This proxies requests to Docker Hub and caches the layers locally. The next time a runner requests node:8-alpine, it pulls the layers at LAN (or virtual network) speed from your local cache rather than over the public internet.
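The simplest way to stand one up is the official registry:2 image in mirror mode, then point each runner's Docker daemon at it. A sketch — registry.your-domain.no follows the article's naming, and the storage path is an assumption:

```bash
# Run a registry that proxies and caches Docker Hub layers
docker run -d --restart=always --name registry-mirror \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /var/lib/registry:/var/lib/registry \
  registry:2

# On each runner, point Docker at the mirror in /etc/docker/daemon.json:
#   { "registry-mirrors": ["https://registry.your-domain.no"] }
# then restart the Docker daemon.
```

The first pull of any image populates the cache; every subsequent pull from any runner is served locally.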

Here is a basic Nginx configuration to frontend a local registry:

server {
    listen 443 ssl;
    server_name registry.your-domain.no;

    # SSL Config (LetsEncrypt)
    ssl_certificate /etc/letsencrypt/live/registry.your-domain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/registry.your-domain.no/privkey.pem;

    client_max_body_size 2G;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}

The Kernel Tuning You Forgot

Default Linux kernel settings are often too conservative for heavy CI/CD workloads involving hundreds of containers spinning up and down. You might hit limits on open files or connection tracking.

Edit your /etc/sysctl.conf to widen the bottleneck:

fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.core.somaxconn = 1024

Apply it with sysctl -p. This prevents those obscure "Connection reset by peer" errors during heavy parallel test execution.

Why Hosting Choice Matters

You can optimize your Dockerfile all day, but if your neighbor on a shared host is mining cryptocurrency or running a heavy Magento re-index, your build times will fluctuate. This is the "noisy neighbor" effect.

This is why we architect CoolVDS around KVM (Kernel-based Virtual Machine) and NVMe. KVM provides strict resource isolation—your RAM is your RAM. And NVMe ensures that when your pipeline needs to write 2GB of artifacts, it happens in seconds, not minutes.

For Norwegian businesses, the added benefit is compliance. Keeping your CI/CD artifacts and source code within national borders simplifies your GDPR documentation significantly compared to using US-based cloud build farms.

Final Thoughts

Optimization is an iterative process. Start by measuring your disk I/O latency. If it’s high, move to better infrastructure. Then, implement multi-stage builds. Finally, tune your caching.

Your developers cost significantly more than a proper VPS. Don't let them sit idle waiting for a build. Deploy a high-performance runner on CoolVDS today and get back to shipping code.