Stop Watching Progress Bars: Architecting High-Velocity CI/CD Pipelines in 2019

There is nothing more soul-crushing for a developer than staring at a blinking cursor in a Jenkins console log, waiting for a build that should take thirty seconds but actually takes ten minutes. In the DevOps world, time isn't just money—it's sanity. If your team deploys ten times a day, and every build lags by five minutes due to poor infrastructure, you are burning nearly an hour of productivity per developer, per day.

I recently audited a setup for a mid-sized fintech company in Oslo. Their pipeline was agonizingly slow. The culprit wasn't their code; it was their infrastructure. They were running heavy Java builds inside Docker containers on oversold, shared cloud instances hosted in Frankfurt. The I/O wait times were through the roof.

Today, we are going to fix this. We aren't just "optimizing"; we are surgically removing the fat from your CI/CD pipeline using tools available right now, in mid-2019. We will focus on disk I/O, Docker layer caching, and the critical importance of geographical proximity.

1. The Silent Killer: Disk I/O and the NVMe Necessity

CI/CD is basically a disk torture test. You git clone, you npm install (extracting thousands of tiny files), you compile binaries, and you build Docker images. If you are doing this on standard SSDs—or worse, spinning rust—on a noisy multi-tenant cloud, your CPU is spending half its life waiting for the disk controller.

To diagnose if your current runner is choking on I/O, fire up the terminal during a build and watch iostat:

iostat -x 1

Look at %iowait in the avg-cpu summary (and %util in the per-device columns). If %iowait consistently sits above 5-10% while your build is running, your storage is the bottleneck.
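If you want a hard number rather than a gut feeling, a quick synthetic test does the job. The sketch below assumes fio is installed (apt install fio); the job name, file size and runtime are arbitrary:

# Random 4K writes with direct I/O: roughly the pattern of npm extracts and Docker layer churn
fio --name=ci-disk-test --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

On local NVMe you should typically see random-write IOPS in the tens of thousands; if the result is stuck in the low thousands, the storage is doing the throttling, not your build tool.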

Pro Tip: On a CoolVDS instance, we strictly use NVMe storage passed through via KVM. This drastically reduces the I/O latency compared to standard network-attached block storage often found in budget VPS providers.

When configuring your build servers, you must tune the filesystem. For Linux-based runners (Ubuntu 18.04 LTS is my weapon of choice), ensure you aren't aggressively swapping out memory, which kills build speed.

sysctl -w vm.swappiness=1

Make this permanent in /etc/sysctl.conf. We want the RAM to do the work, not the disk.
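A minimal way to make that stick across reboots (assuming you have sudo on the runner):

# Persist the setting and reload it without rebooting
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p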

2. Optimizing the Docker Engine for Speed

Most modern pipelines in 2019 rely on Docker. But out of the box, Docker can be inefficient if not configured for heavy lifting. The storage driver matters. By now, you should be using overlay2. It is the preferred storage driver for Linux and offers superior performance to the older devicemapper or aufs.

Check your current driver:

docker info | grep Storage

If it doesn't say overlay2, you need to update your /etc/docker/daemon.json. While you are there, let's add a registry mirror: ideally a pull-through cache on your own network, or Google's public Docker Hub mirror as used below. If you have multiple runners in a cluster, pulling the same base images (like node:10-alpine or openjdk:8) over the WAN on every job is madness.

Configuration: /etc/docker/daemon.json

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [
    "https://mirror.gcr.io"
  ],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker with systemctl restart docker. Increasing max-concurrent-downloads utilizes the high bandwidth available on enterprise-grade VPS hosting, like the 1Gbps uplinks we provide on CoolVDS.
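Before queuing builds again, it is worth confirming the daemon actually picked the changes up. Two quick checks (output as of Docker 19.03):

docker info --format '{{.Driver}}'          # should print: overlay2
docker info | grep -A 2 'Registry Mirrors'  # should list your mirror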

3. GitLab CI Runner: The "Concurrent" Trap

I see this configuration error constantly. Admins spin up a powerful server with 16 vCPUs but leave the GitLab Runner's global concurrent setting at its default of 1, so jobs queue up behind each other. Or they set it too high and the builds cannibalize each other's resources.

If you are managing your own runners (which you should, for security and speed), you need to balance the concurrent limit against your available RAM. A heavy webpack build can easily consume 2GB of RAM; if you have 8GB and set concurrent to 5, you are inviting the OOM killer.
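A rough back-of-the-envelope sizing, using the numbers above purely as an illustration:

free -g        # check what the runner actually has
# (8GB total - ~2GB for the OS and Docker daemon) / ~2GB per heavy job ≈ 3 concurrent jobs
# the config below runs concurrent = 4 on a larger box; scale the figure to your own RAM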

Here is a battle-tested config.toml for a medium-sized runner instance:

concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-fast-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.0"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]

Note the pull_policy = "if-not-present". This is crucial. It tells the runner: "If you already have this image, don't ask the registry for it." This saves seconds, sometimes minutes, on every single job.
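For reference, here is a registration command that produces roughly the config above. Flag names are as of GitLab Runner 12.x, so double-check gitlab-runner register --help on your version, and treat the URL and token as placeholders:

sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --description "coolvds-fast-runner-01" \
  --executor "docker" \
  --docker-image "docker:19.03.0" \
  --docker-privileged \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache" \
  --docker-pull-policy "if-not-present"

Note that the global concurrent value is not set during registration; you still edit that by hand at the top of config.toml.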

4. Dockerfile Layering: Don't Break the Chain

Infrastructure can only do so much if your Dockerfile is garbage. In 2019, we still see developers copying their source code before installing dependencies. This invalidates the cache every time you change a line of code, forcing the builder to re-download the internet.

Here is the correct pattern. Observe how we copy the manifest files first:

FROM node:10.16-alpine

WORKDIR /app

# COPY PACKAGES FIRST
# This layer is cached unless package.json changes
COPY package.json package-lock.json ./

# INSTALL DEPENDENCIES
# This runs only when packages change
RUN npm ci --quiet

# NOW COPY SOURCE CODE
COPY . .

CMD ["npm", "start"]

This simple reordering allows Docker to reuse the RUN npm ci layer across builds, reducing build time from 2 minutes to 5 seconds for code-only changes.
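One caveat: the final COPY . . ships the entire build context to the daemon, so keep that context lean with a .dockerignore. The entries below are a typical starting point for a Node project, not a definitive list:

# Keep local artifacts and VCS history out of the build context
cat > .dockerignore <<'EOF'
node_modules
.git
dist
*.log
EOF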

5. Location, Location, Latency

We often ignore the speed of light. If your dev team is in Oslo or Bergen, and your CI/CD runner is in a data center in Virginia (US-East), you are adding roughly 100ms of latency to every handshake. For a git clone with thousands of objects, or uploading a 500MB artifact, that latency compounds.

Furthermore, we have to talk about data sovereignty. With GDPR enforceable for over a year now, many Norwegian companies prefer to keep data within the EEA, or specifically inside Norway, in line with guidance from Datatilsynet (the Norwegian Data Protection Authority).

Hosting your CI/CD infrastructure locally reduces round-trip time (RTT). You can test this easily from your office network:

ping -c 5 185.x.x.x

If you aren't seeing single-digit latency to your infrastructure provider, you are feeling a lag that doesn't need to exist. CoolVDS infrastructure is optimized for the Nordic region. We peer directly at NIX (Norwegian Internet Exchange), meaning your data often doesn't even leave the country's backbone.
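Ping only measures ICMP round trips. To see what connection and TLS setup actually cost against your Git host or registry, curl's timing variables give a quick read; the URL below is a placeholder, so point it at your own endpoint:

curl -o /dev/null -s \
     -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
     https://gitlab.example.com/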

Conclusion: Own Your Pipeline

Managed CI/CD services are convenient, but they are "noisy neighbor" environments. You share CPU time with thousands of other developers. When you need consistent, repeatable performance, nothing beats a dedicated VPS with raw KVM virtualization.

By switching to a self-hosted runner on NVMe-backed storage, optimizing your Docker config for caching, and keeping your data close to your team in Norway, you can turn a 15-minute deployment ordeal into a 3-minute coffee break.

Don't let slow I/O kill your release cadence. Deploy a high-performance CI/CD runner on CoolVDS today and feel the difference raw power makes.