You Are Losing Money Waiting for Docker Builds
I recently audited a deployment pipeline for a fintech startup in Oslo. Their developers were spending an average of 45 minutes per day just staring at Jenkins progress bars. That is not a coffee break; that is a massive operational leak. When we dug into the metrics, the bottleneck wasn't CPU. It wasn't RAM. It was I/O wait time and network latency to US-based artifact repositories.
If you are running CI/CD on standard shared hosting or capped cloud instances, you are effectively throttling your own innovation. Here is the reality: modern package managers like npm, pip, and cargo are disk murderers. They generate thousands of tiny read/write operations. On a standard HDD or a throttled SSD, your pipeline stalls.
This guide cuts through the noise. We are going to optimize a GitLab CI pipeline running on a localized Norwegian VPS to achieve sub-minute build times. No magic, just physics and better configurations.
The Hardware Reality: Why NVMe Matters
Let's talk about the physical layer. Most providers oversell their storage. In a CI environment, you need high random IOPS (Input/Output Operations Per Second). I ran a benchmark comparing a standard SATA SSD VPS against a CoolVDS NVMe instance. The difference in compiling a standard React application was staggering.
Do not take my word for it. Run this fio command on your current build server. If your IOPS are under 10k, you have found your bottleneck.
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
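If you would rather get a single number than scan the full report, fio can emit JSON and you can extract the write IOPS directly. A minimal sketch, assuming jq is installed on the box:

# Same benchmark as above, but with machine-readable output
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1 --output-format=json | jq '.jobs[0].write.iops'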
On a proper CoolVDS instance in our Oslo datacenter, we consistently see random write speeds that handle concurrent Docker layer extractions without breaking a sweat. This is critical because Docker incurs heavy I/O when extracting image layers onto the overlay2 filesystem.
Optimizing the Docker Daemon for Speed
Out of the box, Docker is optimized for compatibility, not speed. In a CI/CD context, and especially when building from Norway, we need to address two things: the storage driver and the registry mirror.
Since Docker Hub introduced strict rate limiting, pulling images directly from the internet for every build is suicidal for productivity. You need a pull-through cache. Furthermore, using the overlay2 driver is non-negotiable for performance.
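Before touching the config, it is worth checking what your daemon is actually using today. A quick sanity check:

# Print the active storage driver; on any modern kernel this should report overlay2
docker info --format '{{.Driver}}'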
Here is the optimized /etc/docker/daemon.json configuration we use for high-throughput build agents:
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  },
  "registry-mirrors": ["https://mirror.gcr.io"]
}
Pro Tip: If you are running multiple runners on a single CoolVDS node, ensure you increase your file descriptors (ulimits). A heavy npm install inside a container can easily hit the default 1024 limit, causing cryptic build failures.
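If you would rather keep image pulls entirely on-node instead of relying on a public mirror, you can run your own pull-through cache next to the runners. Here is a rough sketch using the stock registry image; the port and host path are assumptions, and a plain-HTTP local mirror may additionally need an insecure-registries entry depending on your Docker version:

# Run a local pull-through cache of Docker Hub (port 5000 and the host path are placeholders)
docker run -d --restart=always --name registry-cache \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /var/lib/registry-cache:/var/lib/registry \
  registry:2
# Then point "registry-mirrors" in daemon.json at http://localhost:5000 and restart the daemon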
The Network Factor: Latency to NIX
Latency is the silent killer. If your VPS is in Frankfurt but your developers and test environments are in Oslo, you are adding unnecessary milliseconds to every handshake. For a single request that is negligible, but for a git clone or a 2GB Docker image push, it adds up.
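You can measure the handshake cost yourself with curl's timing variables. A small sketch; the registry URL is a placeholder for whatever endpoint your pipeline actually hits:

# Break one HTTPS request into DNS, TCP, TLS and total time
curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' https://registry.example.com/v2/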
CoolVDS peers directly at NIX (Norwegian Internet Exchange). This keeps traffic local. Data doesn't leave the country unless you tell it to. This brings us to the elephant in the room: GDPR and Schrems II.
By hosting your CI runners and artifacts on Norwegian soil, you simplify compliance. You don't need to explain to Datatilsynet why your source code (which often contains intellectual property and potentially PII in test databases) is transiting through a US-owned cloud provider's region.
GitLab CI Configuration: Caching Done Right
A fast disk is useless if you download the internet every time you commit code. You must configure aggressive caching. However, caching node_modules blindly can lead to corruption. Here is a robust .gitlab-ci.yml strategy that uses a lock file key to invalidate caches only when dependencies change.
stages:
  - build
  - test

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm

build_job:
  stage: build
  image: node:16-alpine
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
Notice the use of npm ci instead of npm install. This command is strictly meant for automated environments. It deletes node_modules and installs exactly what is in the lockfile. Combined with the --prefer-offline flag and the NVMe storage on CoolVDS, this step drops from minutes to seconds.
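You can reproduce the effect locally before trusting it in CI. A quick sketch, run from the project root:

# Cold run: nothing cached, everything comes from the registry
rm -rf node_modules .npm
time npm ci --cache .npm
# Warm run: the .npm cache now exists, so most packages are served from disk
rm -rf node_modules
time npm ci --cache .npm --prefer-offline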
Kernel Tuning for Heavy Loads
When you run fifty parallel tests, the Linux kernel's default network stack can become a bottleneck. We often see connection tracking tables fill up.
Add these lines to your /etc/sysctl.conf to widen the highway:
# Increase the size of the receive queue
net.core.netdev_max_backlog = 5000
# Increase the maximum number of connections
net.netfilter.nf_conntrack_max = 262144
# Decrease the time to keep sockets in FIN-WAIT-2
net.ipv4.tcp_fin_timeout = 15
Apply them with sysctl -p. This prevents your build server from dropping packets during intense integration test suites.
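To confirm that connection tracking is actually what is biting you, compare the live count against the ceiling while a heavy test suite runs (this assumes the nf_conntrack module is loaded):

# Current entries versus the configured maximum
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
# The kernel logs "nf_conntrack: table full, dropping packet" when the table overflows
dmesg | grep -i conntrack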
The Verdict
You can throw money at slow pipelines by buying larger instances from hyperscalers, or you can optimize the architecture. High-performance NVMe storage, local peering at NIX, and proper kernel tuning provide a superior price-to-performance ratio.
For dev teams in Norway, the choice is logical. You get lower latency, strict data sovereignty compliance, and hardware that doesn't choke on I/O. Don't let your infrastructure be the reason you miss a deployment window.
Ready to speed up your builds? Deploy a high-frequency NVMe instance on CoolVDS today and experience the difference raw I/O makes.