Stop Watching Progress Bars: Engineering a Sub-Minute CI/CD Pipeline in Norway
There is nothing more soul-crushing in this industry than pushing a commit and staring at a spinning yellow circle for 15 minutes. In that time, I’ve lost my train of thought, brewed a third cup of coffee, and probably started browsing Hacker News. By the time the pipeline turns green (or red), the context switch cost has already been paid.
If you are running a dev shop in Oslo or Bergen, you can't afford this inefficiency. We aren't just talking about wasted billable hours; we are talking about developer sanity.
Most tutorials tell you to "optimize your code." That’s useless advice. The real bottleneck in 90% of CI/CD pipelines isn't the compilation of Go or Rust binaries; it is the infrastructure underneath it gasping for air. I've spent the last decade debugging pipelines that hang on `npm install` not because the network is slow, but because the disk I/O is saturated by a "noisy neighbor" on a cheap shared VPS.
Here is how we fix it. No fluff, just engineering.
1. The Hidden Killer: Disk I/O Wait
CI/CD is inherently I/O intensive. You are pulling Docker images, extracting layers, writing cache artifacts, and generating binaries. If your runner is hosted on a standard HDD or a throttled SSD, your CPU is sitting idle while it waits for the disk.
Run this during your next build on your current infrastructure:
iostat -x 1 10   # extended device stats, one-second interval, ten samples
If you see your %iowait spiking above 5-10%, your storage is the bottleneck. This is common on cloud providers that oversell their storage arrays.
This is where the architecture of the hosting provider matters. At CoolVDS, we enforce strict KVM isolation and run exclusively on NVMe arrays. We don't just say "SSD"; we mean direct-attached NVMe storage where IOPS aren't capped artificially low. When a build runner requests a read, it happens instantly.
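Want a hard number before you blame your code? A short fio run measures sustained random-read IOPS, the access pattern that dominates image extraction and dependency installs. A minimal sketch, assuming fio is installed (it writes a 1 GB scratch file in the working directory, so run it somewhere disposable):

fio --name=ci-disk-check --ioengine=libaio --rw=randread \
    --bs=4k --size=1G --numjobs=4 --runtime=30 \
    --time_based --group_reporting

Compare the reported IOPS across providers; the gap between dedicated NVMe and an oversold shared array is usually not subtle.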
2. Docker Layer Caching: You're Doing It Wrong
I still see senior engineers make the mistake of copying the entire source tree into the container before installing dependencies. This invalidates the cache on every commit: change a single line, even a comment, and the heavy dependency layer rebuilds from scratch.
Here is the wrong way:
FROM node:18-alpine
WORKDIR /app
COPY . .
# Every code change forces this heavy layer to rebuild
RUN npm ci
CMD ["node", "server.js"]
Here is the production-grade way to structure your Dockerfile for caching:
FROM node:18-alpine AS builder
WORKDIR /app
# 1. Copy only manifests first
COPY package.json package-lock.json ./
# 2. Install dependencies (Cached unless manifests change)
RUN npm ci --quiet
# 3. Copy source code
COPY . .
# 4. Build
RUN npm run build
# 5. Drop devDependencies so only runtime packages ship to production
RUN npm prune --omit=dev
# Final minimal stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
By splitting the copy command, Docker reuses the layer with node_modules unless you actually add a dependency. On a high-performance CoolVDS instance, pulling this cached layer takes milliseconds.
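If your runners don't share a local Docker cache (common once you scale horizontally), you can seed the cache from your registry instead. A sketch, using a hypothetical image path, registry.example.com/app; with BuildKit (the default builder in Docker 23+), the BUILDKIT_INLINE_CACHE build arg embeds the metadata the next build needs to reuse layers:

# Warm the cache from the last published image (tolerate failure on the first build)
docker pull registry.example.com/app:latest || true
docker build --cache-from registry.example.com/app:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t registry.example.com/app:latest .
docker push registry.example.com/app:latest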
3. Distributed Caching with GitLab CI
If you are using GitLab CI (a favorite here in the Nordics), you must configure distributed caching properly. Local caching on the runner works until you scale to multiple runners.
For your .gitlab-ci.yml, ensure you are caching the specific directories where your package manager stores downloaded files, not just the installed folder.
variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - node_modules/
build_job:
  stage: build
  script:
    - npm ci --cache .npm --prefer-offline
  tags:
    - coolvds-nvme-runner
Pro Tip: Network latency matters for cache restoration. If your runners are in Frankfurt but your S3 cache bucket is in Oslo, you are fighting physics. Keep your compute and your object storage in the same region. CoolVDS offers localized peering options that dramatically reduce the RTT (Round Trip Time) between your runner and your storage.
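The region choice lives in the runner's config.toml, under the [[runners]] section. A minimal sketch for an S3-compatible cache, with a hypothetical Oslo endpoint and placeholder credentials:

[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "s3.oslo.example.com"  # hypothetical endpoint; keep it next to your runners
    AccessKey = "REDACTED"
    SecretKey = "REDACTED"
    BucketName = "ci-cache"
    Insecure = false

Shared = true lets every runner pull from the same cache, which is the whole point of going distributed.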
4. Data Sovereignty and The "Schrems II" Reality
We cannot ignore the legal landscape in August 2023. The Norwegian Datatilsynet is vigilant. If your CI/CD pipeline pulls production database dumps to seed staging environments, that is "processing" of personal data in the GDPR sense.
If you are using a US-based cloud provider for your runners, you might be inadvertently transferring that personal data across borders without adequate safeguards. Hosting your CI runners on Norwegian soil (or at least strictly within the EEA, on infrastructure owned by European entities) simplifies compliance significantly.
CoolVDS infrastructure is fully compliant with European data residency requirements. We aren't shipping your build logs to a data center in Virginia.
5. Tuning the Runner Configuration
The default configuration for a GitLab Runner is rarely optimized for high concurrency. You need to adjust the concurrent and limit settings in your config.toml to match the CPU cores available on your CoolVDS instance.
A good rule of thumb for a 4 vCPU CoolVDS instance:
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-fast-runner"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  limit = 4
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
One setting you will not find in config.toml: the Docker storage driver. That belongs to the Docker daemon on the runner host, and it should be overlay2, the current standard. If you are on an older kernel or a legacy VPS that forces you into devicemapper, you are losing massive amounts of performance.
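To verify and pin the driver on the runner host (paths are the Docker defaults; restart the daemon after editing):

# Check the active storage driver
docker info --format '{{.Driver}}'

# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}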
6. The Deployment Handshake
Finally, pushing the artifact. Using scp is fine for hobbyists. For professional pipelines, use rsync over SSH with specific flags to minimize data transfer. We only want to ship the binary deltas.
#!/bin/bash
# Deploy script snippet
set -euo pipefail

# Trust the production host key, then push only the changed files
ssh-keyscan -H "$PRODUCTION_IP" >> ~/.ssh/known_hosts
rsync -avz --delete \
  -e "ssh -i $SSH_KEY" \
  ./dist/ \
  "user@$PRODUCTION_IP:/var/www/app/"
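Wiring this into GitLab CI is a one-job affair. A sketch, assuming $PRODUCTION_IP and $SSH_KEY are defined as CI/CD variables (make $SSH_KEY a file-type variable so it resolves to a path on disk):

deploy_job:
  stage: deploy
  script:
    - chmod 600 "$SSH_KEY"
    - ./deploy.sh
  only:
    - main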
When you target a CoolVDS instance as your production destination, the low latency to NIX (Norwegian Internet Exchange) ensures this transfer is lightning fast. We have seen deployments drop from 45 seconds to 3 seconds just by moving the destination server to a network with better peering in Oslo.
Conclusion
A slow pipeline is a choice. It is a choice to tolerate poor I/O, suboptimal caching, and network latency.
You don't need a massive Kubernetes cluster to fix this. You need rigorous caching strategies and infrastructure that respects your need for speed. By moving your build runners to CoolVDS, you gain the raw NVMe throughput and isolation required to turn those 15-minute waits into 60-second victories.
Stop waiting. Spin up a dedicated CI runner on CoolVDS today and get back to coding.