Stop Watching Progress Bars: A Guide to Brutally Fast CI/CD
There is nothing more soul-crushing than staring at a Jenkins console log for 45 minutes, only to see it fail because of a timeout. I've been there. In 2018, I inherited a pipeline for a major fintech client in Oslo that took an hour to deploy a simple hotfix. The developers were demoralized. The CTO was furious.
We fixed it. We dropped the build time to 6 minutes. We didn't do it with magic. We did it by understanding the hardware constraints underneath the abstraction layers.
If you think your slow pipeline is a software problem, you are likely wrong. It is usually an I/O problem, followed closely by a latency problem. Here is how to fix it, assuming you are running a modern stack on Linux.
1. The I/O Bottleneck: Why HDDs and SATA SSDs Kill Builds
Continuous Integration is basically a torture test for disk I/O. Consider what happens during a standard `npm install`, `mvn package`, or `docker build`:
- Thousands of small files are written and read (random I/O).
- Archives are decompressed.
- Binaries are linked.
I recently benchmarked a heavy Node.js build on a standard cloud instance versus a KVM instance backed by NVMe storage. The difference wasn't 20%. It was 300%. If your VPS provider is putting your CI runners on shared mechanical storage or throttled SATA SSDs, you are paying for your developers to drink coffee.
Pro Tip: Always check your disk wait times. Run `iostat -x 1` during a build. If `%iowait` consistently spikes above 5-10%, your storage is the bottleneck. Move to NVMe.
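If you want a hard number rather than a hunch, run a quick synthetic test. Below is a minimal sketch using fio (assuming it is installed; the file path, size, and runtime are arbitrary) that approximates the random 4k I/O pattern of a dependency install:

```bash
# Random 4k read/write benchmark -- roughly what `npm install`
# looks like to the disk. Requires fio (e.g. apt install fio).
fio --name=ci-disk-test \
    --filename=/tmp/fio-test-file \
    --size=1G \
    --rw=randrw \
    --bs=4k \
    --iodepth=32 \
    --ioengine=libaio \
    --direct=1 \
    --runtime=30 \
    --time_based \
    --group_reporting

# Remove the test file when done
rm -f /tmp/fio-test-file
```

On NVMe you should see tens of thousands of IOPS. Shared SATA or mechanical storage often lands an order of magnitude lower.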
2. Docker Layer Caching: You're Doing It Wrong
Most Dockerfiles I audit are structured for readability, not performance. This is a mistake. Docker caches layers based on the instruction string and the file contents. If you copy your source code before installing dependencies, you invalidate the cache for the dependency install step every time you change a line of code.
Here is the wrong way:
```dockerfile
FROM node:14-alpine
WORKDIR /app
COPY . .
RUN npm install   # <-- Re-runs every time code changes
CMD ["node", "server.js"]
```
Here is the optimized way. This is standard practice, yet I still see senior engineers missing it in production.
```dockerfile
FROM node:14-alpine
WORKDIR /app

# Copy only the dependency definitions first
COPY package.json package-lock.json ./

# This layer is cached unless dependencies change
RUN npm ci --only=production

# NOW copy the source code
COPY . .

CMD ["node", "server.js"]
```
By simply reordering these lines, we saved 4 minutes per build on a project with heavy dependencies.
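You can verify the cache is doing its job by timing two consecutive builds after touching only a source file. A rough sketch (the image tag and file name are placeholders):

```bash
# First build: every layer runs, including the dependency install
time docker build -t myapp:cache-test .

# Change a source file; package.json stays untouched
echo "// cache test" >> server.js

# Second build: the npm ci layer should show "CACHED" (BuildKit)
# or "Using cache" (legacy builder) and finish far faster
time docker build -t myapp:cache-test .
```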
3. Local Caching & Registry Proximity
If your team is in Norway, why are you pulling base images from US East? Latency matters. Throughput matters. When fifty containers spin up simultaneously, your external bandwidth saturates.
We solve this by hosting a pull-through cache registry locally. If you run this on a CoolVDS instance in Oslo, your latency to the NIX (Norwegian Internet Exchange) is negligible. You aren't fighting for trans-Atlantic bandwidth.
Here is a basic Nginx configuration to proxy a registry (or any artifact server) with caching enabled. This setup saved us terabytes of external transfer, and the associated fees, last year.
```nginx
proxy_cache_path /var/cache/nginx/registry levels=1:2 keys_zone=registry_cache:10m
                 max_size=10g inactive=7d use_temp_path=off;

server {
    listen 443 ssl;
    server_name registry.internal.yoursite.no;
    # SSL configs omitted for brevity

    location / {
        proxy_pass https://registry-1.docker.io;
        proxy_set_header Host registry-1.docker.io;
        proxy_set_header X-Real-IP $remote_addr;

        # Caching logic
        proxy_cache registry_cache;
        proxy_cache_key $uri;
        proxy_cache_valid 200 7d;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```
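To make your runners actually use the mirror, point the Docker daemon at it. A sketch for a systemd-based host, assuming the hostname from the config above (note that `registry-mirrors` only applies to Docker Hub pulls):

```bash
# Tell Docker to try the local mirror before Docker Hub
cat <<'EOF' > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.internal.yoursite.no"]
}
EOF

systemctl restart docker

# Sanity check: requests should now flow through the proxy; the
# X-Cache-Status header comes from the add_header directive above
curl -sI https://registry.internal.yoursite.no/v2/ | grep -i x-cache-status
```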
4. The "Noisy Neighbor" Problem
In a CI environment, consistency is as important as speed. If Build A takes 5 minutes and Build B takes 15 minutes because another tenant on the host machine decided to mine crypto, your metrics are useless.
This is where virtualization technology matters. Container-based VPS platforms (OpenVZ, LXC) often let neighbors steal your CPU time. We rely on KVM (Kernel-based Virtual Machine) for CoolVDS instances. KVM provides harder isolation: your RAM is yours, your CPU cycles are yours. This predictability is essential for pipelines that run strict integration tests, where timing variance causes flaky failures.
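Before migrating anything, you can measure how much CPU your current provider's neighbors are taking from you. The `st` (steal) column reported by vmstat is time the hypervisor gave to someone else:

```bash
# Sample CPU counters once per second for ten seconds; the last
# column ('st') is CPU time stolen by the hypervisor
vmstat 1 10

# The same figure appears in top's CPU summary line
top -bn1 | grep '%Cpu'
# On a properly isolated KVM instance this should sit at or near 0
```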
GitLab CI Optimization Example
If you are using GitLab CI, ensure you are using the distributed caching mechanism correctly to persist `node_modules` or `vendor` folders between stages.
```yaml
stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .m2/repository/

build_job:
  stage: build
  script:
    - npm install
    - npm run build
  tags:
    - coolvds-runner-oslo

test_job:
  stage: test
  script:
    - npm test
    # No need to npm install again, the cache handles it
```
5. Compliance and Data Sovereignty
We operate in a post-GDPR world. Datatilsynet (the Norwegian Data Protection Authority) is not lenient. When your CI/CD pipeline processes production databases for sanitization or testing, that data is landing on a disk somewhere.
If you use a generic cloud builder, do you know where that physical disk sits? Is it in Frankfurt? Virginia? Or Oslo? Keeping your build infrastructure domestic isn't just about latency—it's about legal risk reduction. Hosting on Norwegian soil simplifies your Article 28 data processing agreements significantly.
6. Performance Tuning Kernel Parameters
Finally, default Linux distros are tuned for general usage, not high-throughput build servers. We apply these sysctl tweaks to our build agents to handle high connection churn and file open rates typical in heavy pipelines.
```bash
# /etc/sysctl.conf

# Increase max open files (essential for heavy npm/java builds)
fs.file-max = 2097152

# Increase range of ephemeral ports for high connection rates
net.ipv4.ip_local_port_range = 1024 65535

# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Increase TCP buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```
Apply these with `sysctl -p`. You will likely see fewer "connection reset" errors during heavy parallel test execution.
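It is also worth confirming the values actually took, and remembering that `fs.file-max` is a system-wide ceiling; per-process limits are still governed by ulimit (or systemd's LimitNOFILE). A quick check:

```bash
# Confirm the kernel picked up the new values
sysctl fs.file-max net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse

# fs.file-max is system-wide; the per-process limit is separate
ulimit -n
```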
Summary
Speed is a feature. If your developers wait 30 minutes to see if they broke the build, they switch contexts. Focus is lost. Productivity dies.
Optimize your Dockerfiles. Cache aggressively. But most importantly, ensure your underlying infrastructure has the raw I/O throughput to handle the load. We built CoolVDS on NVMe and KVM specifically to solve this problem for professionals who can't afford to wait.
Don't let slow hardware kill your software velocity. Deploy a high-performance runner on CoolVDS in Oslo today and cut your build times in half.