Stop Watching Progress Bars: Brutal CI/CD Optimization for Nordic Dev Teams
I walked into a development shop in Oslo last week. The coffee machine was the busiest node in the office. Why? "Waiting for the pipeline," the lead backend engineer told me. They were pushing a hotfix for a Magento checkout bug, and the build-test-deploy loop was taking 24 minutes.
That is unacceptable. In 2021, if your feedback loop exceeds five minutes, you aren't doing DevOps; you're doing "Deployment Theater."
We need to talk about physics. We need to talk about I/O. And we need to talk about why your shared cloud runners are killing your velocity.
1. The Hidden I/O Bottleneck
Most developers treat CI/CD optimization as a software problem. They tweak their webpack configs or parallelize tests. Those are good steps. But they miss the elephant in the server room: Disk I/O.
Think about what a CI job actually does:
- `git clone` (Disk Write)
- `docker pull` (Network + Massive Disk Write/Extraction)
- `npm install` / `composer install` (Thousands of small file Writes)
- Artifact archiving (Disk Read/Write)
If you are running this on standard magnetic storage or even cheap, network-throttled SATA SSDs (common in budget VPS providers), your CPU is spending half its time in iowait. You are paying for compute cycles that are just sitting there, waiting for the disk controller to catch up.
Pro Tip: Run `iostat -x 1` on your current runner during a build. If `%util` hits 100% while `%user` is below 50%, your storage is the bottleneck. This is why CoolVDS standardizes on local NVMe storage. We don't throttle IOPS because we know `npm install` needs to breathe.
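If `iostat` isn't installed, you can eyeball the same signal straight from `/proc/stat`. A rough sketch (field order follows the standard Linux `/proc/stat` layout; the one-second sample window is arbitrary):

```shell
# Snapshot the aggregate CPU counters twice, one second apart, and
# compute what share of that second was spent in iowait.
# Fields after "cpu": user nice system idle iowait irq softirq ...
read -r _ user nice system idle iowait rest < /proc/stat
total1=$((user + nice + system + idle + iowait))
io1=$iowait
sleep 1
read -r _ user nice system idle iowait rest < /proc/stat
total2=$((user + nice + system + idle + iowait))
io2=$iowait
# +1 in the denominator guards against a zero delta on idle boxes
echo "iowait share over the last second: $(( (io2 - io1) * 100 / (total2 - total1 + 1) ))%"
```

If that percentage climbs during `npm install` while your build barely uses the CPU, you have the same disease as the Oslo shop.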
2. Docker Layer Caching: You're Doing It Wrong
I see `Dockerfile` setups that destroy cache validity on line 3. Every time you change a line of code, Docker invalidates that layer and every layer after it.
Here is the golden rule: Copy dependencies first, source code second.
The "Slow" Way (Common):
FROM node:14-alpine
WORKDIR /app
COPY . .
# If I change one CSS file, this layer invalidates
RUN npm ci
CMD ["node", "server.js"]
The "Fast" Way (Optimized):
FROM node:14-alpine
WORKDIR /app
# Only copy package definitions first
COPY package.json package-lock.json ./
# This layer is cached unless dependencies change
RUN npm ci
# NOW copy the source code
COPY . .
CMD ["node", "server.js"]
By splitting the copy command, npm ci (which is heavy on network and disk) only runs when you actually add a new library. For 95% of commits, Docker serves this from the cache instantly.
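One more cache saver worth pairing with this: a `.dockerignore` file, so the final `COPY . .` doesn't drag volatile files into the build context and churn the layer hash. The entries below are typical examples, not a prescription — adjust them to your repo:

```text
# .dockerignore — keep host-only and high-churn paths out of the build context
node_modules
.git
dist
*.log
.env
```

Excluding `node_modules` also prevents your host's platform-specific binaries from leaking into the Alpine image.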
3. Configuring a Private Runner on CoolVDS
Shared runners (like GitHub Actions free tier or GitLab.com shared runners) are convenient but inconsistent. You suffer from "noisy neighbors"—other users stealing CPU cycles. Plus, with the Schrems II ruling last year making data transfers to US-owned clouds legally risky, hosting your runners on Norwegian soil is not just a performance play; it's a compliance necessity.
Here is how to deploy a lightning-fast GitLab Runner on a CoolVDS instance running Ubuntu 20.04.
Step 1: Install Docker & The Runner
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Add the GitLab Runner repo
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
# Install the runner (ensure version matches your GitLab instance, likely 13.x)
sudo apt-get install gitlab-runner
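After installation, the runner still has to be registered against your GitLab instance. A non-interactive sketch — the URL, token, and description are placeholders you'd swap for your own:

```shell
# Register the runner with the Docker executor (token comes from your
# project or group's CI/CD settings page)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:19.03.12" \
  --description "CoolVDS-NVMe-Runner-01"
```

Registration writes the `[[runners]]` section into `config.toml`, which we tune next.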
Step 2: The Critical `config.toml` Tweak
The default configuration is too conservative. We need to enable the Docker socket binding (carefully) for caching and set concurrency limits that match our vCPUs.
Edit /etc/gitlab-runner/config.toml:
concurrent = 4 # Match this to your CoolVDS CPU core count
check_interval = 0
[[runners]]
  name = "CoolVDS-NVMe-Runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_REGISTRATION_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    # Socket binding (in volumes below) makes privileged mode unnecessary
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Mounting /var/run/docker.sock allows the container inside the runner to spawn sibling containers rather than children. This is dangerous in public clouds, but on a private VPS dedicated to your team, it allows you to persist Docker image layers on the host system. Your subsequent builds don't need to re-download base images.
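You can see the "sibling" effect for yourself. A container that mounts the host socket talks to the host daemon, so it lists the host's containers rather than its own children (assumes Docker is already running on the VPS):

```shell
# Any container that mounts the host socket controls the host daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:19.03.12 docker ps
# The listing matches `docker ps` run on the host itself — proof these
# are siblings on one daemon, and why this mount belongs only on a
# trusted, single-team machine.
```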
4. The Network Factor: Latency to NIX
If your team is in Oslo or Bergen, pushing code to a server in Frankfurt or Virginia adds latency. It might seem small (20ms vs 100ms), but CI pipelines are "chatty." They make thousands of small requests.
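The arithmetic is brutal even at small scale. A back-of-envelope sketch (the request count and round-trip times are illustrative assumptions, not measurements):

```shell
# Extra wall-clock time a "chatty" pipeline spends on the wire,
# comparing a nearby NIX-connected datacenter to a transatlantic one.
requests=2000        # small HTTP calls per pipeline run (assumed)
rtt_local_ms=20      # e.g. Oslo -> Norwegian datacenter
rtt_remote_ms=100    # e.g. Oslo -> US east coast
extra_s=$(( requests * (rtt_remote_ms - rtt_local_ms) / 1000 ))
echo "Extra time waiting on the wire: ${extra_s}s per pipeline run"
```

Under those assumptions that's 160 seconds of pure latency per run — before a single byte of payload moves.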
| Feature | Shared Cloud Runner | CoolVDS Private Runner |
|---|---|---|
| Disk Type | Network Storage (Variable Latency) | Local NVMe (Consistent High IOPS) |
| Cache Persistence | Cleared after job (usually) | Persists on Host (Instant reuse) |
| Data Sovereignty | Uncertain (US Cloud Act?) | 100% Norway / GDPR Compliant |
| Cost Predictability | Per-minute billing | Flat monthly rate |
5. Advanced: Distributed Caching with MinIO
If you scale beyond one runner node, host-local caching stops working in your favor: each machine warms its own cache independently. You need a distributed cache. Since we are avoiding US clouds due to GDPR, self-hosting MinIO (S3-compatible object storage) on a separate CoolVDS instance is the robust solution.
In your `.gitlab-ci.yml`, you point the runner to your local MinIO instance. This keeps all artifacts within your private network, usually over a high-speed internal LAN if you provision the VPSs in the same datacenter.
# .gitlab-ci.yml snippet
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
    - .next/cache/
Combining this with a local MinIO backend ensures that even if you wipe the runner, your heavy node_modules folder is pulled from a local server, not from the public npm registry.
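On the runner side, the matching piece is the `[runners.cache]` section of `config.toml`. A sketch pointing at a self-hosted MinIO — the address, bucket, and credentials are placeholders for your own setup:

```toml
  [runners.cache]
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "minio.internal:9000"  # your MinIO VPS on the private network
      AccessKey = "MINIO_ACCESS_KEY"
      SecretKey = "MINIO_SECRET_KEY"
      BucketName = "runner-cache"
      Insecure = true                        # plain HTTP is acceptable only inside the LAN
```

`Shared = true` lets every registered runner read and write the same bucket, so a cache warmed by one node benefits all of them.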
The Verdict
Speed is a feature. If your developers are waiting 20 minutes to see if they broke the build, they will commit code less often. They will merge larger, riskier PRs. Quality will degrade.
You don't need a massive Kubernetes cluster to fix this. You need raw, unadulterated I/O speed and a network topology that respects the laws of physics.
Next Step: Stop renting slow, shared cycles. Spin up a High-Performance NVMe VPS on CoolVDS today. Install a runner, bind the socket, and watch your build times drop.