Stop Treating Your CI/CD Pipeline Like a Coffee Break
I have a rule in my engineering teams: if a commit takes longer than five minutes to build and test, the pipeline is broken. I don't care if the tests pass. If developers are switching contexts or wandering off to the espresso machine because npm install is taking an eternity, you are bleeding money.
We are in late 2018. The days of dragging artifacts across slow spinning rust drives should be over. Yet, I constantly see Jenkins servers configured like it's 2010, choking on disk I/O and struggling with latency issues because someone decided to host their build infrastructure in a cheap US-East data center while their dev team sits in Oslo.
Let's fix this. We are going to look at raw I/O throughput, Docker layer caching, and why your virtualization platform is likely the culprit.
1. The Silent Killer: Disk I/O Wait
Most CI/CD bottlenecks aren't CPU bound; they are I/O bound. When you run docker build or unzip a massive vendor directory, you are hammering the disk. If you are running on a standard VPS provider overselling their SATA SSD arrays, your iowait is likely spiking through the roof.
Run this on your build server while a job is processing:
iostat -xz 1
If your %util is hitting 100% or your await is exceeding 10ms consistently, your storage is too slow. You can tune your software all day, but you cannot code your way out of bad hardware.
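If you want a second opinion beyond iostat, a quick sequential-write check with dd gives you a raw number to compare providers with. This is a rough sketch, not a proper benchmark (use fio for that); the target path is an example, so point it at the volume your builds actually touch:

```shell
# Quick sequential-write sanity check with dd (present on any Linux box).
# conv=fdatasync forces a flush, so the speed reflects the disk, not the page cache.
TESTFILE=/tmp/ci-io-test.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

The last line dd prints includes the effective throughput; anything in the low hundreds of MB/s on a supposedly SSD-backed host tells you everything you need to know.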
This is where CoolVDS differs from the budget commodity hosts. We strictly use local NVMe storage passed through via KVM. We don't use network-attached block storage for root volumes, which eliminates the noisy neighbor effect on disk I/O. In a CI environment, NVMe isn't a luxury; it's a requirement to keep build queues empty.
2. Optimizing Docker Builds with Multi-Stage (The 2018 Standard)
If you aren't using multi-stage builds yet, stop reading and refactor your Dockerfile. Introduced in Docker 17.05, this is the single best way to keep your images small and your builds fast.
Stop chaining && rm -rf /var/lib/apt/lists/* in a single layer just to save space. Use a build stage.
Bad Pattern (The Old Way):
FROM node:8
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Result: Massive image containing devDependencies and source code
The Optimized Pattern:
# Stage 1: The Builder
FROM node:10-alpine AS builder
WORKDIR /app
COPY package*.json ./
# 'npm ci' was introduced in npm 5.7 (2018) - use it for deterministic builds
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: The Runtime
FROM node:10-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
Pro Tip: Notice the use of npm ci instead of npm install. If you have a package-lock.json, this command deletes node_modules and installs dependencies directly from the lockfile. It is significantly faster and crucial for consistent CI environments.
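One more easy win for the pattern above: a .dockerignore file keeps the build context small, so that COPY . . doesn't ship node_modules or .git to the Docker daemon on every build. A minimal example (the entries are typical for a Node project; adjust to your repo):

```
# .dockerignore — keep the build context lean
node_modules
.git
dist
*.log
```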
3. Jenkins & GitLab Runner Tuning
Default configurations are for hobbyists. If you are running GitLab CI or Jenkins, you need to tune the concurrency and cache drivers.
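As a sketch of what that tuning looks like for a self-hosted GitLab Runner, the relevant knobs live in /etc/gitlab-runner/config.toml. The values below are examples, not recommendations; size concurrency to your cores and RAM:

```toml
# /etc/gitlab-runner/config.toml (excerpt; values are examples)
concurrent = 4            # max jobs running across all registered runners

[[runners]]
  name = "nvme-builder"   # example name
  executor = "docker"
  [runners.docker]
    image = "docker:18.06"
    volumes = ["/cache"]  # shared cache volume between jobs
```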
For GitLab Runners using the Docker executor, ensure you are using the overlay2 storage driver. The older aufs driver is slower and deprecated. Check your /etc/docker/daemon.json:
{
"storage-driver": "overlay2"
}
Furthermore, if you are hosting your own runners (which you should, for security and speed), you need to manage the cleanup. Docker objects accumulate rapidly. Don't rely on the garbage collector alone. I use a simple cron job on my build nodes to keep the NVMe drive fresh:
#!/bin/bash
# /usr/local/bin/docker-cleanup.sh
# Remove unused containers
docker container prune -f
# Remove dangling images (layers that have no relationship to any tagged images)
docker image prune -f
# Remove unused volumes (Be careful with this one in persistent envs)
# docker volume prune -f
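To actually schedule the script, drop an entry into cron; the schedule and log path here are examples, so pick a window when no builds run:

```cron
# /etc/cron.d/docker-cleanup — run nightly at 03:00
0 3 * * * root /usr/local/bin/docker-cleanup.sh >> /var/log/docker-cleanup.log 2>&1
```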
4. The Norway Factor: Latency and Data Sovereignty
We are seeing stricter enforcement from Datatilsynet regarding where data is processed. With GDPR fully enforceable since May, relying on US-based build servers to process databases containing Norwegian customer data (even test data) is a compliance minefield.
Beyond compliance, there is physics. If your dev team is in Oslo or Bergen, pushing gigabytes of Docker images to a registry in Frankfurt or Virginia introduces latency.
Round Trip Time (RTT) matters:
| Origin | Destination | Avg Latency | Impact on 1GB Push |
|---|---|---|---|
| Oslo (Fiber) | CoolVDS (Oslo/NIX) | ~2-5 ms | Negligible |
| Oslo (Fiber) | AWS (London) | ~25-30 ms | Noticeable lag |
| Oslo (Fiber) | US East (N. Virginia) | ~90-110 ms | Painful |
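The "Impact" column isn't hand-waving: a single TCP stream's throughput is roughly capped at window_size / RTT. A quick back-of-the-envelope with awk, assuming a 64 KiB window (real stacks scale windows, so treat this as an illustrative lower bound, not a measurement):

```shell
# Rough single-stream TCP ceiling: window / RTT (64 KiB window assumed)
for rtt_ms in 3 27 100; do
  awk -v rtt="$rtt_ms" 'BEGIN {
    printf "RTT %3d ms -> ~%.1f MB/s per stream\n", rtt, 65536 / (rtt / 1000) / 1048576
  }'
done
```

At well under 1 MB/s per stream on a trans-Atlantic link, a 1 GB image push becomes a coffee break of its own.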
Hosting your GitLab instance or Jenkins master on a CoolVDS instance in Norway keeps your traffic local. You benefit from direct peering at NIX (Norwegian Internet Exchange), meaning your git pushes and docker pulls happen at line speed, not trans-Atlantic speed.
5. Security in the Pipeline
In 2018, we can no longer ignore security scanning. However, scanning tools are resource-intensive. Running a tool like SonarQube or Clair requires significant RAM.
Do not attempt to run these on a 1GB RAM instance. The Java process will hit the kernel's OOM (Out of Memory) killer and take your build down with it. For a robust pipeline including static analysis, I recommend a minimum of 8GB RAM and 4 vCPUs. This ensures the scanner doesn't starve the build process.
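A cheap guard at the top of the scan stage saves you from mysterious build failures. This sketch reads MemAvailable from /proc/meminfo (Linux-only; the 8 GB threshold simply mirrors the recommendation above):

```shell
# Warn if available memory is below what heavy scanners need (Linux only)
avail_mb=$(awk '/MemAvailable/ {print int($2 / 1024)}' /proc/meminfo)
echo "Available memory: ${avail_mb} MB"
if [ "$avail_mb" -lt 8192 ]; then
  echo "WARNING: below the 8 GB recommended for SonarQube-class scanners"
fi
```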
On CoolVDS, we guarantee dedicated CPU cycles. Unlike shared hosting where your "vCPU" is a timeshare that gets throttled when you need it most, our KVM architecture ensures that when your pipeline demands 100% CPU for compilation, you get it.
Summary: The Path to Sub-Minute Builds
- Upgrade to NVMe: Mechanical drives are dead for CI/CD.
- Localize Infrastructure: Keep your build servers close to your developers and your production targets in Norway.
- Refactor Dockerfiles: Use multi-stage builds and npm ci.
- Monitor I/O: If iowait > 5%, move to a better provider.
Your infrastructure shouldn't be the reason your team misses a deadline. If you are tired of watching a spinning loading bar, it is time to upgrade the engine underneath.
Ready to cut your build times in half? Deploy a high-performance NVMe instance on CoolVDS today and experience the difference raw power makes.