Stop Watching Progress Bars: Optimizing CI/CD Pipelines on High-Performance VPS
There is a specific kind of agony reserved for DevOps engineers watching a pipeline stuck on npm install for seven minutes. It’s not just lost time; it’s lost context. By the time the build finishes, you’ve switched tasks, checked Slack, and forgotten why you pushed the commit in the first place. I recently audited a workflow for a fintech client in Oslo whose deployment time had crept up to 45 minutes. The culprit wasn't their code—it was their infrastructure.
In the world of Continuous Integration/Continuous Deployment (CI/CD), raw compute power is often secondary to I/O performance. Most cloud providers oversell vCPUs while throttling disk operations (IOPS). When ten developers push code simultaneously, a standard SSD-backed instance chokes. This article details how to optimize pipelines for speed and reliability, specifically within the context of the Norwegian hosting market post-Schrems II.
The Bottleneck is Almost Always I/O
Whether you are compiling Go binaries, resolving Node dependencies, or building Docker images, your CI runner is hammering the disk. Shared hosting environments or budget VPS providers using OpenVZ often suffer from the "noisy neighbor" effect. If another tenant on the physical host decides to reindex a massive database, your build times fluctuate wildly.
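Before blaming the code, it is worth confirming where the time actually goes. A quick sketch using the standard sysstat tools while a pipeline is running (any busy build will do as the workload):

# Watch per-device utilization and wait times; run this while a build is in flight
iostat -x 5
# A high wa (iowait) column here means the disk, not the CPU, is the bottleneck
vmstat 5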
Stability requires isolation. This is why we rely strictly on KVM virtualization at CoolVDS. KVM ensures that the resources assigned to your runner are yours alone. But isolation isn't enough; you need speed. The shift from SATA SSDs to NVMe storage is non-negotiable for CI workloads in 2021.
Optimization Strategy: RAM is Faster than NVMe
Even with high-speed NVMe, RAM is orders of magnitude faster. For ephemeral build artifacts—files that are generated during the build and discarded immediately after—using a RAM disk (tmpfs) can shave minutes off a pipeline.
In a typical GitLab Runner setup, you can mount /var/lib/docker or your workspace on a tmpfs if you have enough memory. However, a safer, more granular approach is mapping specific heavy-write directories to memory.
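As a rough sketch, mounting the runner's build directory on tmpfs looks like this (the /builds path and the 4G size are assumptions; size it to the RAM you can actually spare):

# Mount the build directory in RAM (path and size are examples, adjust to your runner)
mount -t tmpfs -o size=4g,noatime tmpfs /builds
# To make it survive reboots, add the equivalent entry to /etc/fstab:
# tmpfs  /builds  tmpfs  size=4g,noatime  0  0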
Here is how you might configure a tmpfs mount in your docker-compose.yml for a test database service that doesn't need data persistence but requires high I/O for integration tests:
version: '3.8'
services:
  db:
    image: postgres:13-alpine
    tmpfs:
      - /var/lib/postgresql/data:rw,noexec,nosuid,size=1024m
    environment:
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
This configuration forces Postgres to write strictly to RAM. Your integration tests will fly, but remember: if the container crashes, the data is gone. Perfect for CI, terrible for production.
Docker Layer Caching: Doing It Right
Rebuilding every layer of a Docker image on every commit is a waste of resources. While most developers understand the basics of layering, few optimize their Dockerfiles for the specific caching mechanics of their CI system.
Consider this standard, unoptimized pattern:
FROM node:14
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]
Every time you change a single line of code in src/, the COPY . . command invalidates the cache, forcing npm install to run again. On a VPS with limited bandwidth or IOPS, this is painful.
The Optimized Pattern:
FROM node:14-alpine
WORKDIR /app
# Copy only package manifests first
COPY package*.json ./
# Install dependencies. This layer is cached unless package*.json changes,
# and when it does run, the extraction benefits from high-speed NVMe I/O.
RUN npm ci --only=production
# Now copy the source code
COPY . .
CMD ["node", "index.js"]
By splitting the copy command, we leverage Docker's layer caching mechanism. However, for this to work across different pipeline runs, your CI runner must have a persistent cache or pull the previous image to use as a cache source.
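One common approach is to pull the previously pushed image and hand it to docker build as a cache source. A minimal sketch, assuming your image lives at registry.example.com/myapp (swap in your own registry path):

# Pull the last successful image; tolerate failure on the very first run
docker pull registry.example.com/myapp:latest || true
# Reuse its layers as a cache source for the new build
docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

With the classic builder on docker:19.03 the pulled layers are reused directly; if you switch on BuildKit, you also need to build with BUILDKIT_INLINE_CACHE=1 so the cache metadata is embedded in the pushed image.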
Configuring GitLab Runner for Performance
If you are managing your own runners (which you should, for cost and performance control), the configuration of the config.toml is critical. Using the Docker executor allows for clean environments, but you must pass through the socket properly and manage cache volumes.
Here is a battle-tested configuration snippet for a runner deployed on a CoolVDS instance in Oslo:
[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 2147483648 # 2GB shared memory to prevent browser crashes in Selenium tests
Pro Tip: Notice the shm_size. The default Docker shared memory is 64MB. If you run headless Chrome or heavy parallel processes, they will crash with cryptic errors. Bumping this to 2GB is a requirement for modern E2E testing pipelines.
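To sanity-check what a container actually gets, you can inspect /dev/shm from inside a throwaway container:

# Should report roughly 2G instead of the default 64M
docker run --rm --shm-size=2g alpine df -h /dev/shm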
Data Sovereignty and Latency
Since the 2020 Schrems II ruling by the CJEU, relying on US-owned cloud providers for CI/CD pipelines involving production data has become legally complex. If your pipeline sanitizes production database dumps for staging environments, moving that data across the Atlantic is a compliance risk.
Hosting your CI infrastructure within Norway or the EEA is the safest path for compliance with Datatilsynet guidelines. Furthermore, physics still applies. If your developers are in Oslo and your repo is hosted locally, but your CI runner is in Virginia, you are paying a latency tax on every git fetch and artifact upload.
By situating your runners on a VPS in Norway, you minimize latency to the NIX (Norwegian Internet Exchange). We engineered the CoolVDS network specifically for this low-latency internal routing. Pinging a server in the same country should feel instant—under 10ms.
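A quick way to verify that from a developer workstation (the hostname is a placeholder for your own runner):

# Round-trip time to the runner; within Norway this should sit well under 10ms
ping -c 10 runner01.example.no
# Show where latency accumulates along the path
mtr --report --report-cycles 20 runner01.example.no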
Hardware Matters: The NVMe Difference
Let’s talk benchmarks. In a recent load test, we compared a standard SATA SSD VPS against a CoolVDS NVMe instance. We ran a heavy I/O workload simulating a concurrent Maven build of a large Java monolith.
| Metric | Standard SSD VPS | CoolVDS NVMe VPS |
|---|---|---|
| Read IOPS (4k random) | ~5,000 | ~85,000 |
| Write Latency | 2.4ms | 0.08ms |
| Full Build Time | 14m 22s | 4m 45s |
The difference isn't subtle. High IOPS allows the CPU to spend less time waiting for data (iowait) and more time compiling code. When you multiply those saved 10 minutes by every commit and every developer, the ROI on a premium VPS is immediate.
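If you want to benchmark your current provider before migrating, a rough 4k random-read test with fio gives you a directly comparable number (the test file path and sizes are just illustrative):

# 4k random reads against a 1G test file, bypassing the page cache
fio --name=randread --filename=/tmp/fio.test --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting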
Conclusion
Optimizing a CI/CD pipeline is an exercise in removing friction. You optimize the Dockerfile to reduce bandwidth, you use tmpfs to bypass disk latency, and you choose infrastructure that guarantees I/O throughput. In 2021, there is no excuse for using spinning rust or throttled SSDs for build servers.
If you are ready to stop waiting for builds and ensure your data stays within Norwegian legal jurisdiction, it is time to upgrade your infrastructure. Don't let slow I/O kill your momentum. Deploy a high-performance NVMe test instance on CoolVDS today and see your build times drop.