Stop Watching Progress Bars: Optimizing CI/CD Pipelines on VPS Infrastructure
It is 3:00 AM. You are staring at a Jenkins console output, watching npm install fetch the same packages it fetched twenty minutes ago. The build fails because of a timeout. If this sounds familiar, your CI/CD pipeline isn't working for you; you are working for it. In the high-stakes world of DevOps, waiting on infrastructure is the ultimate sin.
Most developers treat Continuous Integration servers as black boxes—you throw code in, and hopefully, a build comes out. But when you are running on standard cloud instances, you are often fighting a hidden war against I/O wait times and noisy neighbors. I have spent the last decade debugging stalled pipelines, and 90% of the time, the bottleneck isn't your code. It's the storage subsystem and the network latency.
Let's look at how to architect a pipeline that actually respects your time, specifically within the context of the Norwegian hosting market where data sovereignty (thanks, GDPR) and latency to Oslo matter.
The Hidden Bottleneck: Disk I/O
CI/CD is inherently I/O intensive. Whether you are extracting artifacts, compiling Java binaries, or building Docker images, you are hammering the disk. On a cheap VPS using shared SATA SSDs (or heaven forbid, HDDs), your iowait will skyrocket. The CPU sits idle while the disk struggles to write.
First, diagnose the problem. Log into your runner and install sysstat:
apt-get update && apt-get install -y sysstat
# Extended device statistics, sampled every second, ten reports
iostat -x 1 10
Look at the %iowait column. If you are seeing numbers consistently above 5-10% during a build, your storage is choking. This is where infrastructure choice becomes binary: you either suffer, or you upgrade to NVMe.
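If you want a single number for a whole build rather than eyeballing live output, here is a rough sketch using sar (also part of sysstat); npm run build stands in for whatever your actual build command is:

# Sample CPU stats every second in the background while the build runs
sar -u 1 > /tmp/iowait.log &
SAR_PID=$!
npm run build
kill $SAR_PID
# %iowait is the third column from the right in sar's CPU report
awk '/ all /{sum += $(NF-2); n++} END {if (n) printf "average iowait: %.1f%%\n", sum/n}' /tmp/iowait.log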
Pro Tip: NVMe storage isn't just about raw throughput; it's about queue depth. SATA (AHCI) gives you a single command queue with a depth of 32; NVMe supports up to 64K queues with 64K commands each. When running parallel test suites (e.g., ParaTest with --processes=4), that parallelism is what keeps every worker fed.
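To see what queue depth does on your own runner, fio makes the comparison concrete. This is a sketch, not a tuned benchmark; the file path and sizes are arbitrary:

# Random 4K reads at queue depth 32 — run the same test on SATA- and NVMe-backed instances and compare IOPS
fio --name=qd-test --filename=/tmp/fio-test --size=1G --direct=1 \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based --group_reporting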
At CoolVDS, we standardized on NVMe for our KVM instances precisely for this reason. We saw build times for a standard Magento 2 deployment drop from 18 minutes to 6 minutes just by moving from SATA SSD to NVMe storage. No config changes, just better hardware.
Optimizing Docker Builds: Layer Caching
If you are using Docker (and in 2018, who isn't?), your Dockerfile structure determines your speed. Docker caches layers based on the instruction itself and, for COPY and ADD, a checksum of the files being copied. Once a layer's cache is invalidated, every layer after it has to be rebuilt.
The Wrong Way:
FROM node:10
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
Every time you change a single line of code in your source files, the checksum behind COPY . . changes, invalidating that layer and everything after it. Docker then re-runs npm install. This wastes bandwidth and time.
The Optimized Way:
FROM node:10
WORKDIR /app
# Copy package definitions first
COPY package.json package-lock.json ./
# Install dependencies (Cached unless dependencies change)
RUN npm install
# Copy the rest of the code
COPY . .
CMD ["npm", "start"]
By copying the manifest files separately, you ensure npm install only runs when you actually add or remove a dependency. This seems basic, but I audit pipelines weekly where this is missed.
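While you are at it, a .dockerignore keeps files Docker never needs out of the build context, so the final COPY . . isn't invalidated by noise. A minimal example; adjust to your project:

# Hypothetical minimal .dockerignore
node_modules
.git
npm-debug.log
dist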
GitLab CI Runner Configuration
For those running GitLab CI (a solid choice for keeping data in-house and compliant with Datatilsynet requirements), the default configuration of the gitlab-runner is rarely sufficient for high-load environments.
You need to tune /etc/gitlab-runner/config.toml to handle concurrency without killing the host. If you are hosting your runner on a CoolVDS instance with 4 vCPUs, do not set concurrent to 10.
concurrent = 4
check_interval = 0

[[runners]]
  name = "CoolVDS-Oslo-Runner-01"
  url = "https://gitlab.example.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Note the privileged = true flag. While security purists might scream, for building Docker-in-Docker (dind) pipelines, it is often a necessary evil. Just ensure your runner is on an isolated network segment.
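For reference, a minimal .gitlab-ci.yml sketch that exercises such a privileged runner with Docker-in-Docker could look like this; registry.example.com and the myapp image name are placeholders, and registry authentication is omitted:

image: docker:stable

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2

build:
  stage: build
  script:
    # docker login / registry credentials omitted for brevity
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA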
Network Latency and the Nordic Advantage
Why does geography matter for CI/CD? Because npm install, pip install, and docker pull are network operations. If your runner is in a datacenter in Virginia, USA, but your private Docker registry and your developer team are in Oslo, you are adding 100ms+ of latency to every single request.
When you aggregate thousands of small file requests (common in Node.js applications), that latency kills performance.
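A quick way to eyeball that overhead is curl's built-in timing variables; the express metadata document here is just an arbitrary, well-known target:

# Connection time and time-to-first-byte for a single registry request
curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n" \
    https://registry.npmjs.org/express

Multiply that time-to-first-byte by a few thousand requests per build and the geography argument makes itself.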
| Location | Avg Latency to Oslo (NIX) | GDPR Compliance Risk |
|---|---|---|
| US East (Virginia) | ~110 ms | High (Cloud Act) |
| Germany (Frankfurt) | ~30 ms | Low |
| Norway (CoolVDS) | < 5 ms | None |
Hosting your CI infrastructure locally on CoolVDS guarantees single-digit latency to the Norwegian Internet Exchange (NIX). Plus, with the implementation of GDPR earlier this year, keeping your source code—which often contains sensitive logic or hardcoded secrets (we all do it, even if we shouldn't)—within Norwegian borders satisfies the strictest interpretation of data residency.
Advanced: Using Local Caching Proxies
If you have multiple pipelines running the same builds, stop fetching from the public internet every time. Set up a local Nexus (or similar artifact repository) or a caching proxy.
Here is a quick Nginx configuration snippet to act as a reverse proxy cache for your internal artifacts, significantly reducing bandwidth usage:
proxy_cache_path /var/cache/nginx/npm levels=1:2 keys_zone=npm_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name npm-cache.internal;

    location / {
        proxy_cache npm_cache;
        proxy_pass https://registry.npmjs.org;
        proxy_set_header Host registry.npmjs.org;
        # SNI is required for the upstream TLS handshake
        proxy_ssl_server_name on;
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        # Upstream cache headers are ignored above, so set an explicit lifetime or nothing gets cached
        proxy_cache_valid 200 302 60m;
    }
}
Deploy this on a small CoolVDS instance, point your runners to it via npm config set registry http://npm-cache.internal, and watch your dependency resolution time drop to near zero.
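If you would rather not bake the registry override into every image, the same setting can live in the pipeline itself; a sketch, assuming the npm-cache.internal hostname above resolves from your runners:

# Point every job at the local cache before installing dependencies
before_script:
  - npm config set registry http://npm-cache.internal/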
The Hardware Reality
You can tweak configurations all day, but you cannot software-patch bad hardware. When a cloud provider oversubscribes their CPU or limits your IOPS to incentivize you to upgrade, your pipeline suffers.
We built CoolVDS because we were tired of "burstable" instances that burst for 30 seconds and then throttled. Our KVM instances provide dedicated resources. If you pay for 4 cores, you get 4 cores. If you need high I/O for database integration tests, our NVMe storage delivers it without questions.
Final Thoughts
Your time is the most expensive resource in the company. Spending hours a week waiting for builds is a waste of capital and morale. Optimize your Dockerfile, configure your runner for concurrency, and run it on hardware that keeps up with you.
Ready to cut your build times in half? Deploy a high-performance NVMe KVM instance on CoolVDS today and see what raw, unthrottled I/O does for your pipeline.