Stop Burning NOK: The Architect’s Guide to Slashing CI/CD Build Times
There is a specific kind of silence that fills an open-plan office in Oslo around 14:00. It’s not focus. It’s the sound of expensive engineers waiting for Jenkins to finish a build. I recently audited a fintech startup in Barcode where the lead developer spent an average of 45 minutes a day waiting on the pipeline. At a conservative rate of 1,500 NOK/hour, that one developer is burning over 20,000 NOK a month just staring at a console log.
We obsess over micro-optimizations in our Go code and tidy up React components, yet we run our infrastructure on strangled, noisy-neighbor VPS environments that choke on I/O. If your npm install takes three minutes, it’s not just node_modules bloat; it’s likely your disk queue length.
Here is how we fix the pipeline bottlenecks using 2020-era best practices, focusing on the brutal reality of hardware limitations and Docker caching strategies.
The Hidden Bottleneck: It’s Always I/O
Most CI/CD jobs are disk-bound. Unpacking artifacts, compiling binaries, pulling Docker images—these are I/O intensive operations. If you are running your runners on standard SATA SSDs (or worse, spinning rust) in a shared cloud environment, your IOPS (Input/Output Operations Per Second) are being capped. You are fighting for throughput with the other 500 tenants on that hypervisor.
Pro Tip: Run iostat -x 1 on your CI runner during a build. If your %util hits 100% while your CPU idles at 20%, your hosting provider is robbing you of productivity. This is why we default to CoolVDS NVMe instances for build agents—the I/O throughput is dedicated, not just promised.
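If sysstat is not already installed on the runner, a quick check looks something like this (assuming a Debian/Ubuntu host):
# Install sysstat if it is missing, then watch extended disk statistics
sudo apt-get install -y sysstat
iostat -x 1
# A %util column pinned at 100 while the CPU sits mostly idle means the build is disk-bound, not CPU-bound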
Optimization 1: Docker Layer Caching (The Right Way)
In 2020, multi-stage builds are mandatory. But even with multi-stage builds, I see developers invalidating their cache in step one. Docker caches layers based on the instruction string and, for COPY and ADD, a checksum of the files involved.
The Wrong Way:
FROM node:12-alpine
WORKDIR /app
COPY . .
# This runs every time any file changes, even a README update
RUN npm install
CMD ["node", "index.js"]
The Battle-Hardened Way:
FROM node:12-alpine AS builder
WORKDIR /app
# Copy only manifests first to leverage cache
COPY package.json package-lock.json ./
# This layer is now cached unless dependencies actually change
RUN npm ci --quiet
# NOW copy the source code
COPY . .
RUN npm run build
FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
By copying the manifests separately, we ensure that the heavy npm ci layer stays cached unless you actually change a dependency. On a CoolVDS instance with high-speed NVMe, restoring this layer takes milliseconds.
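If your runners are ephemeral and the daemon’s local cache does not survive between jobs, you can seed the cache from your registry instead. A minimal sketch with the classic Docker builder, assuming a hypothetical registry.example.com/myapp image:
# Pull the previous build (tolerate failure on the very first run) and reuse its layers as cache
docker pull registry.example.com/myapp:latest || true
docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest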
Optimization 2: The "Database on Ramdisk" Trick
Integration tests that hit a real database are slow. They require writing to the disk for transaction logs. For a CI pipeline, durability does not matter. If the build fails, we destroy the container anyway. We don't need ACID compliance; we need speed.
If you are using MySQL/MariaDB in your pipeline, configure it to ignore disk syncing. This is dangerous in production, but a savior in CI.
In your my.cnf or command arguments:
[mysqld]
# Disable disk sync for speed
innodb_flush_log_at_trx_commit = 0
sync_binlog = 0
innodb_doublewrite = 0
With innodb_flush_log_at_trx_commit = 0, InnoDB leaves committed transactions in the log buffer and flushes to disk roughly once per second instead of on every commit. On a recent Magento deployment, this reduced the integration test suite runtime from 12 minutes to 4 minutes.
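To take the "ramdisk" in the heading literally, you can also put the datadir itself in RAM. A minimal sketch using Docker's --tmpfs flag, assuming a throwaway mysql:5.7 container and roughly 1 GiB of spare memory on the runner:
# Disposable CI database: the datadir lives on tmpfs, flush settings relaxed as above
docker run -d --name ci-mysql \
  --tmpfs /var/lib/mysql:rw,size=1g \
  -e MYSQL_ROOT_PASSWORD=ci \
  mysql:5.7 \
  --innodb-flush-log-at-trx-commit=0 \
  --sync-binlog=0 \
  --skip-innodb-doublewrite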
Optimization 3: Geography Matters (GDPR & Latency)
Why is your repo on GitHub (US), your CI runner in AWS Frankfurt (Germany), and your staging server in Oslo? The speed of light is a hard limit. Furthermore, with the growing scrutiny around Privacy Shield and data transfers, keeping your build artifacts within Norway is not just a performance tweak—it's a compliance safety net.
We recommend running a self-hosted GitLab instance, or at least your GitLab Runners, on local infrastructure. By peering directly at NIX (the Norwegian Internet Exchange), latency drops from ~30 ms (Oslo to Frankfurt) to ~2 ms (local).
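A quick way to sanity-check where your pipeline's round trips actually go (the hostnames here are placeholders for your own Git host and staging server):
# Measure round-trip time from the CI runner to the Git host and to staging
ping -c 5 gitlab.example.no
mtr --report --report-cycles 10 staging.example.no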
Configuring a High-Performance GitLab Runner
Don't use the default shell executor. Use the Docker executor with proper volume mounting for caching.
[[runners]]
  name = "CoolVDS-NVMe-Runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Mounting /var/run/docker.sock allows the container to spawn sibling containers, utilizing the host's Docker daemon cache directly. This is critical. Without this, you are downloading the base image for every single job.
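You can verify this from any job container: with the socket mounted, the docker CLI inside the container talks to the host daemon and sees its image cache. A quick sketch:
# The inner docker CLI lists the HOST daemon's images, confirming the cache is shared
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:19.03.5 docker images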
The Verdict: Infrastructure is a Feature
You can optimize your Dockerfiles until they are works of art, but if the underlying metal is slow, your pipeline stays slow. Shared hosting environments with "burstable" CPU credits are the enemy of CI/CD. You need consistent, raw compute power.
When we build pipelines for clients dealing with Datatilsynet requirements and high-frequency deployments, we stop playing the cloud lottery. We deploy KVM-based instances where the RAM is dedicated and the NVMe storage screams.
Don't let slow I/O kill your developer velocity. Spin up a specialized CI/CD runner on CoolVDS today and watch those build times drop.