Why Your Jenkins Build Takes 20 Minutes (And Why It Should Take 3)
It is 16:30 on a Friday. You push a critical hotfix to `master`. The tests pass locally in seconds. You trigger the build. Then you wait.
And wait.
If you are running your Continuous Integration (CI) pipeline on a standard, over-sold VPS, you are essentially burning money. In 2018, the most expensive resource in your company isn't RAM or CPU cycles—it is the patience of your engineering team. I have seen decent sysadmins try to optimize `npm install` or `mvn clean install` for weeks, tweaking caching layers, only to realize the problem isn't the software. It's the disk.
Let's get technical about why your pipelines are stalling and how we fix this using KVM, NVMe, and compliant Norwegian infrastructure.
The Silent Killer: I/O Wait
Most developers assume a slow build is a CPU problem. They scale up vCPUs and see zero improvement. Why? Because modern build processes are heavily I/O bound.
Consider a standard Node.js project. `node_modules` can easily contain 30,000+ small files. When you run `npm install`, the filesystem has to perform thousands of random write operations. If you are on a traditional spinning HDD or even a cheap SATA SSD shared with fifty other noisy neighbors, your `iowait` spikes. Your CPU sits idle, waiting for the disk controller to catch up.
Here is how you diagnose this. SSH into your build runner during a job and check `iostat`:
```
$ iostat -x 1
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12.40    0.00    2.10   45.50    0.00   40.00

Device: rrqm/s wrqm/s   r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await svctm  %util
vda       0.00  24.00  2.00 180.00   8.00 4200.00    46.24     2.50  12.50    4.00   13.00  5.50  99.10
```
See that `%iowait` at 45.5%? That means nearly half the time, your expensive CPU is doing absolutely nothing. See `%util` at 99.10%? Your disk is saturated. This is unacceptable for a production environment.
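To confirm it is actually your build generating that load, and not a stray cron job, break the I/O down per process. A minimal sketch using `pidstat` (from the sysstat package) and `iotop`; package names below assume a Debian/Ubuntu runner:

```bash
# Debian/Ubuntu: install the diagnostic tools if they are missing
sudo apt-get install -y sysstat iotop

# Per-process disk I/O, refreshed every second -- during a build you
# should see node, java or dockerd dominating the kB_wr/s column
pidstat -d 1

# Show only the processes currently performing I/O
sudo iotop -o
```

If the top writers are your build tools, the software fixes below will help. If `await` stays high even while your own processes are quiet, the contention is coming from outside your VM.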
Optimization 1: Docker Layer Caching (The Software Fix)
Before we talk hardware, ensure you aren't shooting yourself in the foot with Docker. In 2018, Docker 18.05 is the standard, and multi-stage builds are mature. Yet, I still see `Dockerfile` setups like this:
```dockerfile
# BAD PRACTICE
FROM node:10-alpine
COPY . .
RUN npm install
CMD ["npm", "start"]
```
Every time you change a single line of code in `src/`, Docker invalidates the cache for the `COPY . .` layer, forcing `npm install` to run from scratch. This kills your I/O.
The Fix:
```dockerfile
# OPTIMIZED FOR 2018
FROM node:10-alpine
WORKDIR /app

# Copy package files first
COPY package.json package-lock.json ./

# Install dependencies (cached unless dependencies change)
RUN npm install --production

# Then copy source code
COPY . .

CMD ["npm", "start"]
```
This ensures you only hit the disk for dependencies when you actually change dependencies.
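A related, easily missed detail: the build context. Assuming a typical Node.js layout, a `.dockerignore` file stops Docker from streaming your local `node_modules` and `.git` history to the daemon on every build, which is pure wasted I/O. A minimal sketch; the entries are examples you should adjust to your project:

```bash
# Keep the Docker build context lean -- these paths are examples
cat > .dockerignore <<'EOF'
node_modules
.git
npm-debug.log
coverage
EOF
```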
Optimization 2: The Hardware (The Real Fix)
Software tweaks only go so far. If you are serious about DevOps, you need NVMe (Non-Volatile Memory Express). Unlike SATA SSDs, which are throttled by an interface protocol (AHCI) originally designed for spinning hard drives, NVMe talks to the CPU directly over the PCIe bus.
At CoolVDS, we don't use spinning rust. We use enterprise-grade NVMe storage. Here is the difference in IOPS (Input/Output Operations Per Second) we see in our Oslo benchmarks:
| Storage Type | Random Read IOPS | Latency |
|---|---|---|
| HDD (7200 RPM) | ~80-120 | ~4-10 ms |
| SATA SSD (Standard VPS) | ~5,000-10,000 | ~0.2 ms |
| CoolVDS NVMe | ~300,000+ | <0.05 ms |
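Do not take any vendor's table at face value, including ours. `fio` lets you measure random-read IOPS on the exact disk your builds run on; the directory, file size, and runtime below are illustrative, so point it at the filesystem your CI workspace actually uses:

```bash
# 4K random reads against the filesystem that hosts your builds
sudo fio --name=randread --directory=/var/lib/docker \
    --rw=randread --bs=4k --size=1G --numjobs=4 \
    --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```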
Pro Tip: When configuring your Jenkins agent or GitLab Runner, verify the underlying filesystem. We recommend using `ext4` with the `noatime` flag to reduce unnecessary write operations on every file read.
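As a sketch of what that looks like in practice (the device name and mount point are examples; check `mount` and `lsblk` for your own layout):

```bash
# See how the build volume is currently mounted
mount | grep -E 'vda|docker'

# Temporary: remount with noatime, assuming the workspace is its own mount
sudo mount -o remount,noatime /var/lib/docker

# Permanent: the matching /etc/fstab line would look like this
# /dev/vda1  /var/lib/docker  ext4  defaults,noatime  0  2
```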
The GDPR Elephant in the Room
It has been one month since GDPR (General Data Protection Regulation) went into full enforcement on May 25th. If you are a Norwegian company, the days of casually storing your build artifacts and database dumps in an S3 bucket in Virginia are effectively over.
The Norwegian Data Protection Authority (Datatilsynet) is watching. If your CI pipeline processes production data—sanitized or not—for integration testing, that data needs to stay within the EEA (European Economic Area). Latency is another factor. If your developers are in Oslo or Bergen, why route your git pushes through Frankfurt or London?
CoolVDS infrastructure is physically located in Norway. We peer directly with NIX (Norwegian Internet Exchange). This gives you sub-millisecond latency to local ISPs and keeps your data strictly under Norwegian and European jurisdiction. This is not just performance; it is compliance insurance.
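You do not have to take the latency claim on faith either. From your office or an existing server, check the round trip and the path your packets take; the hostname below is a placeholder for your own Git or CI endpoint:

```bash
# Round-trip time to your Git server
ping -c 10 git.example.no

# Hop-by-hop path -- Frankfurt or London transit hops on a purely
# Norwegian route mean your provider is not peering locally at NIX
mtr --report --report-cycles 10 git.example.no
```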
Configuring GitLab Runner for Concurrency
If you are moving away from Jenkins to GitLab CI (a smart move this year), ensure your `config.toml` allows for concurrency, but limit it based on your vCPU count to prevent context switching overhead.
On a CoolVDS instance with 4 vCPUs, I recommend the following configuration:
```toml
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  # docker:dind reads this variable and uses the faster overlay2 storage driver
  environment = ["DOCKER_DRIVER=overlay2"]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```
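For reference, registering that runner non-interactively looks roughly like this; the token, description, and tags are placeholders, and you should confirm the flags against `gitlab-runner register --help` on your installed version:

```bash
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --docker-privileged \
  --description "coolvds-nvme-runner-01" \
  --tag-list "docker,nvme"
```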
The Verdict
You can spend weeks optimizing your build scripts, or you can solve the root cause. CI/CD pipelines are disk-intensive workloads. They demand high IOPS and low latency.
Don't let a $10/month saving on hosting cost you thousands in developer downtime. Deploy a KVM-based, NVMe-backed instance. Keep your data in Norway. Keep your builds green and fast.
Ready to cut your build time in half? Deploy a high-performance Runner on CoolVDS today.