Stop Burning Engineer Hours on ``docker build`` Waiting Times
I recently audited a deployment pipeline for a fintech startup in Oslo. Their complaint was simple: "We push code, and then we wait 40 minutes to see if it failed." That is not a pipeline. That is a coffee break funded by venture capital. Upon inspection, their Jenkins runners were choking. Not on CPU, but on Disk I/O. They were running on standard SSDs shared with fifty other noisy tenants on a budget provider.
If you are serious about DevOps, you stop treating your build servers like second-class citizens. In this post, we are going to look at the mechanics of build latency, specifically focusing on the I/O bottleneck in 2019-era containerization and how to fix it using proper caching and high-performance infrastructure.
The Hidden Killer: I/O Wait in Container Builds
Most developers assume that compilation is a CPU-bound task. For C++ or Go, that might be true. But for the modern web stack—Node.js, Python, PHP—the bottleneck is almost always the filesystem. Think about what happens during npm install or composer install. You are writing tens of thousands of tiny files to the disk.
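If you want to see the scale for yourself, count what a single install actually writes (this assumes a standard Node project layout; substitute whatever your dependency directory is):

```bash
# Count the files npm just wrote; a medium-sized app easily lands in the tens of thousands
find node_modules -type f | wc -l
```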
If your VPS is running on standard SATA SSDs (or worse, spinning rust), your IOPS (Input/Output Operations Per Second) hit the ceiling immediately. I ran a benchmark on a standard VPS versus a CoolVDS NVMe instance using fio. The difference isn't just a percentage; it's an order of magnitude.
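If you want to reproduce that comparison on your own hosts, a run along these lines is a reasonable starting point (the file path, size, and job counts are illustrative; tune them to your runner):

```bash
# Random 4k read/write mix, roughly what a dependency install looks like to the disk.
# --direct=1 bypasses the page cache so you measure the device, not RAM.
fio --name=ci-disk-test \
    --filename=/var/lib/docker/fio-test.bin \
    --size=2G --bs=4k --rw=randrw --rwmixread=70 \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
rm /var/lib/docker/fio-test.bin   # remove the test file afterwards
```

Compare the reported read/write IOPS between the two hosts; the gap speaks for itself.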
When your drive latency spikes, your CPU sits idle, waiting for data. You can see this in top under the wa (I/O wait) column.
```
%Cpu(s): 12.0 us, 4.0 sy, 0.0 ni, 45.0 id, 39.0 wa, 0.0 hi, 0.0 si, 0.0 st
```

See that 39.0 wa? That means your CPU spent nearly 40% of its time doing absolutely nothing but waiting for the disk. This is why we migrated the Oslo client to CoolVDS instances backed by NVMe storage. Their build times dropped from 40 minutes to 12 minutes without changing a single line of code.
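If you want to confirm the same pattern on your own runners, iostat from the sysstat package breaks the wait down per device (the 5 is just the sampling interval in seconds):

```bash
# %util close to 100 and a high await on the device backing /var/lib/docker
# mean your builds are I/O-bound, not CPU-bound.
iostat -x 5
```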
Optimizing the Docker Storage Driver
In 2019, if you are still using the devicemapper storage driver, you are doing it wrong. It is slow, and it eats space. The industry standard is now overlay2, but you need a backing filesystem that supports d_type (ext4, or xfs formatted with ftype=1).
Check your current driver:
```bash
docker info | grep Storage
```

If it doesn't say overlay2, you need to update your daemon configuration. Here is the /etc/docker/daemon.json configuration we apply on our managed runners to ensure maximum throughput:

```json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the Docker daemon (systemctl restart docker) after applying this. Be aware that switching storage drivers hides your existing images and containers, so plan to re-pull or rebuild them.

Dependency Caching Strategies
Hardware solves the raw speed, but logic solves the redundancy. A common mistake I see in .gitlab-ci.yml or Jenkinsfile definitions is downloading dependencies from scratch on every commit. This wastes bandwidth and time.
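As a minimal sketch of what that looks like in GitLab CI (the job name and script are placeholders, not your pipeline), cache npm's download directory rather than node_modules, since npm ci wipes node_modules on every run:

```yaml
# Illustrative .gitlab-ci.yml fragment: keep npm's package cache between runs
build:
  image: node:10-alpine
  variables:
    npm_config_cache: "$CI_PROJECT_DIR/.npm"   # point npm's cache inside the workspace
  cache:
    key: "$CI_COMMIT_REF_SLUG"                 # one cache per branch
    paths:
      - .npm/
  script:
    - npm ci --quiet --prefer-offline
    - npm run build
```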
Pro Tip: Docker layers are cached. If you copy your source code before installing dependencies, you invalidate the cache every time you change a single line of code. Always copy dependency definitions (package.json, pom.xml, requirements.txt) first.
Here is the correct way to structure a Dockerfile for a Node application to leverage layer caching:

```dockerfile
FROM node:10-alpine
WORKDIR /app
# Only copy package.json first to leverage Docker cache
COPY package.json package-lock.json ./
# This layer will be cached unless package.json changes
RUN npm ci --quiet
# Now copy the rest of the source
COPY . .
CMD ["npm", "start"]By doing this, the heavy npm ci step is skipped entirely if package.json hasn't changed. This is fundamental, yet so many teams get it wrong.
The "Data Sovereignty" Latency Factor
We are operating in a post-GDPR world. The Datatilsynet (Norwegian Data Protection Authority) is watching. Moving data across the Atlantic to US-based CI servers isn't just a legal grey area; it is slow. Speed of light is a hard limit.
If your repository is hosted in Europe, but your CI runners are in Virginia (US-East), you are adding 80-120ms of latency to every git fetch and artifact upload. For a large repo, this adds up.
| Route | Latency (Avg) | Impact on 1GB Artifact Upload |
|---|---|---|
| Oslo -> US East | 110ms | Slow (throughput limited by the bandwidth-delay product) |
| Oslo -> Amsterdam | 25ms | Moderate |
| Oslo -> CoolVDS (Oslo) | <2ms | Instant |
Hosting your CI runners locally in Norway on CoolVDS doesn't just keep you compliant with data residency norms; it creates a snappy, responsive pipeline. We peer directly at NIX (Norwegian Internet Exchange), meaning your push reaches the server almost instantly.
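You don't have to take the latency table on faith; measure the round trip from wherever your repository lives to a candidate runner (the hostname below is a placeholder):

```bash
# Average round-trip time plus per-hop latency to the runner
ping -c 10 runner-01.example.no
mtr --report --report-cycles 10 runner-01.example.no
```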
Jenkins Slave on KVM: The Isolation You Need
Containers are great, but they share the host kernel. If you have a "noisy neighbor" on the same host running a crypto miner or a heavy database, your build container suffers. This is the flaw of container-based VPS providers.
At CoolVDS, we use KVM (Kernel-based Virtual Machine). When you provision a VPS for your Jenkins master or slave, you get dedicated resources. The RAM is yours. The CPU cycles are yours.
For a robust Jenkins setup, I recommend a Master-Slave architecture. Keep the Master lightweight. Offload the heavy lifting to ephemeral slaves that spin up on demand. Here is a snippet for launching a JNLP slave agent, ensuring it has enough heap memory to handle large Java builds:

```bash
# The JNLP URL and secret come from the node's configuration page on the Jenkins master
java -Xmx2048m -jar slave.jar \
  -jnlpUrl http://jenkins-master:8080/computer/builder-01/slave-agent.jnlp \
  -secret 123456789abcdef
```

Conclusion: Speed is a Feature
You cannot afford to wait 40 minutes for a build in 2019. The market moves too fast. By combining optimized Docker layering, the correct storage drivers, and the raw I/O power of NVMe storage locally in Norway, you can reclaim hundreds of engineering hours per year.
Don't let slow I/O kill your release cadence. Deploy a high-performance CI runner on CoolVDS today and watch your wait time disappear.