Slash CI/CD Build Times: The NVMe & Architecture Guide for Norwegian DevOps
There is nothing more soul-crushing than pushing a critical hotfix and staring at a spinning progress bar for 25 minutes. If your CI/CD pipeline takes longer than a coffee break, you are bleeding money. I've seen development teams in Oslo treat build times as a "break," but when each of 10 developers pushing 5 times a day loses even 15 minutes to waiting, you're losing over 12 hours of productivity every single week.
Most tutorials tell you to "optimize your code." That's lazy advice. The real bottleneck in 2019 usually isn't the compilation of your Go binary or the transpilation of your React app; it's the infrastructure underneath it. Specifically: Input/Output (I/O) wait times and network latency.
Let's dissect how to cut build times in half using valid strategies available today, from Docker caching tricks to choosing the right virtualization.
The Silent Killer: Disk I/O Latency
CI/CD processes are notoriously I/O heavy. Think about what happens during a standard build:
- npm install extracts thousands of small files.
- Docker pulls layers and writes them to disk.
- Artifacts are zipped, moved, and unzipped.
If you are running your Jenkins or GitLab runners on a budget VPS with standard SATA SSDs (or heaven forbid, spinning rust), your CPU is spending half its time waiting for the disk to confirm a write. This is where NVMe storage becomes non-negotiable.
In a recent benchmark I ran on a CoolVDS NVMe instance versus a standard cloud provider's "High Performance" block storage, the difference in extracting a node_modules folder was staggering. The NVMe drive finished the operation 4x faster. When you choose a host, ensure you are getting direct-attached NVMe, not network-attached storage that chokes during peak hours.
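If you want to sanity-check your own runner before blaming the pipeline, a quick random-write test with fio gives a rough picture. This is a minimal sketch, assuming fio is installed on the runner host; the target directory, sizes, and thresholds are illustrative, not part of the benchmark above:

# 30-second 4k random-write test; point --directory at the filesystem your builds actually touch
fio --name=ci-disk-check --directory=/var/lib/docker \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --iodepth=32 \
    --runtime=30 --time_based --group_reporting

Direct-attached NVMe typically reports tens of thousands of write IOPS here; if you are seeing low single-digit thousands, the disk is your bottleneck, not your Dockerfile.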
Docker Layer Caching: Stop Rebuilding Everything
If you aren't leveraging Docker's layer caching, you are doing it wrong. Docker builds images layer by layer. If a layer hasn't changed, Docker uses the cached version. The most common mistake I see in Dockerfiles looks like this:
# BAD PRACTICE
FROM node:10-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
Why is this bad? Because COPY . . copies everything, including your source code. If you change one line of CSS, the cache for that layer breaks. Consequently, the next instruction, RUN npm install, has to execute again from scratch. You are reinstalling dependencies every time you fix a typo.
Here is the optimized approach:
# OPTIMIZED APPROACH
FROM node:10-alpine
WORKDIR /app
# Copy only package files first
COPY package.json package-lock.json ./
# Install dependencies (creates a cached layer)
RUN npm ci
# Copy the rest of the source code
COPY . .
CMD ["npm", "start"]
Now, npm ci only runs again when package.json or package-lock.json changes. If you just edit application code, Docker skips straight to the final COPY command. This alone can shave 5 minutes off a build.
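One more cheap win in the same spirit: add a .dockerignore so that COPY . . (and the build context you ship to the Docker daemon) doesn't drag in things the image never needs. A minimal example, assuming a typical Node project layout:

# .dockerignore
node_modules
.git
dist
coverage
*.log

Excluding node_modules here also prevents your local install from clobbering the one created by npm ci inside the image.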
Pro Tip: Use npm ci instead of npm install in CI environments. Introduced in npm v6, npm ci (clean install) is faster and more reliable because it strictly follows your package-lock.json and deletes the existing node_modules folder before starting, ensuring no phantom dependency issues.
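If you want to shave a little more off the install step, npm also accepts flags that skip work a CI run rarely needs; --no-audit and --prefer-offline are standard npm options, used here as an illustrative tweak rather than a requirement:

# Skip the audit round-trip and prefer the local cache when packages are already there
npm ci --no-audit --prefer-offline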
Enabling BuildKit (Experimental)
Docker 18.09 introduced BuildKit, a new build engine that is significantly faster and supports concurrent build stages. It isn't enabled by default yet, but switching it on is a one-line change. BuildKit skips stages your target image never references and builds independent stages in parallel.
To use it in your pipeline, export the variable before running the build command:
export DOCKER_BUILDKIT=1
docker build -t my-app .
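To see that parallelism pay off, give BuildKit stages that don't depend on each other. The layout below is a hypothetical sketch assuming a monorepo with client/ and server/ directories; the stage names and paths are assumptions, not the app from the earlier examples:

# Backend dependencies
FROM node:10-alpine AS backend
WORKDIR /app
COPY server/package.json server/package-lock.json ./
RUN npm ci

# Frontend build; shares nothing with "backend", so BuildKit can run both concurrently
FROM node:10-alpine AS frontend
WORKDIR /app
COPY client/package.json client/package-lock.json ./
RUN npm ci
COPY client/ .
RUN npm run build

# Final image pulls artifacts from both stages
FROM node:10-alpine
WORKDIR /app
COPY --from=backend /app/node_modules ./node_modules
COPY server/ .
COPY --from=frontend /app/build ./public
CMD ["npm", "start"]

With the legacy builder those stages run one after another; with DOCKER_BUILDKIT=1 the backend and frontend stages build concurrently, and any stage the final image never copies from is skipped entirely.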
The Kernel Bottleneck: File Watchers
If you are running heavy CI workloads, you might hit the Linux kernel limit for file watchers, especially with Webpack or heavy Java applications. When this limit is reached, your build doesn't just slow down; it crashes with obscure errors like ENOSPC.
Don't wait for it to fail. Tune your sysctl.conf on your runner nodes immediately:
# /etc/sysctl.conf
fs.inotify.max_user_watches=524288
fs.file-max=100000
Apply it without rebooting:
sudo sysctl -p
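Not sure whether you are anywhere near the ceiling? The current limit is readable straight from procfs or via sysctl before you change anything:

# Current inotify watch limit on this runner
cat /proc/sys/fs/inotify/max_user_watches
sysctl fs.inotify.max_user_watches

Many distributions still ship the kernel default of 8192, which a single large Webpack watch can exhaust on its own.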
Latency and Sovereignty: The Norwegian Context
For teams based in Oslo, Bergen, or Trondheim, network latency to the build server matters. Pushing a 2GB Docker context to a server in Frankfurt takes time. Pushing it to a server in Oslo via NIX (Norwegian Internet Exchange) takes seconds.
Furthermore, with GDPR fully enforceable since last year, data residency is a massive compliance factor. If your CI/CD pipeline processes production database dumps for staging environments (a common, albeit risky practice), that data is traversing borders. Hosting your CI runners on CoolVDS within Norway keeps that data under Norwegian jurisdiction and Datatilsynet's guidelines, simplifying your compliance posture significantly.
GitLab CI Optimization Example
Here is a snippet for a .gitlab-ci.yml that leverages Docker-in-Docker (dind) efficiently, assuming you are running on a runner with high I/O capabilities:
stages:
  - build

variables:
  DOCKER_DRIVER: overlay2

build_image:
  image: docker:18.09
  services:
    - docker:18.09-dind
  stage: build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Note the --cache-from flag. This tells Docker to download the previous image and use it as a cache source for the current build. However, it is network-intensive, which is where CoolVDS's unmetered bandwidth and low latency to local Docker registries shine.
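One caveat: if your Dockerfile is multi-stage, --cache-from against the final image won't help the earlier stages, because their layers never end up in that image. A common workaround is to push and reuse the intermediate stage as well; the "builder" stage name and extra tag below are assumptions, not part of the snippet above:

# Warm both caches; "|| true" keeps the very first pipeline from failing
docker pull $CI_REGISTRY_IMAGE:builder || true
docker pull $CI_REGISTRY_IMAGE:latest || true

# Rebuild and push the intermediate stage so later pipelines can reuse it
docker build --target builder --cache-from $CI_REGISTRY_IMAGE:builder -t $CI_REGISTRY_IMAGE:builder .
docker push $CI_REGISTRY_IMAGE:builder

# Build the final image with both caches available
docker build --cache-from $CI_REGISTRY_IMAGE:builder --cache-from $CI_REGISTRY_IMAGE:latest \
  -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .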
Infrastructure: Shared vs. Dedicated Resources
Many developers blame Jenkins when the real culprit is "noisy neighbors." On shared hosting or oversold VPS platforms, your CPU might be stolen by another user's runaway PHP script just as you try to compile assets. In a CI environment, consistency is key.
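You don't have to take anyone's word for it. On a Linux VM, the "st" (steal) column in vmstat shows the share of time the hypervisor handed your CPU to another guest; run it while a build is in progress:

# Sample CPU statistics once per second, five times; watch the "st" column
vmstat 1 5

A steal value that regularly climbs above a few percent while you compile is a strong hint that your runner is sharing more than it should.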
We use KVM (Kernel-based Virtual Machine) at CoolVDS specifically to prevent this. KVM provides hardware virtualization, meaning your RAM and CPU resources are isolated at the hypervisor level. Unlike with container-based virtualization (OpenVZ or LXC), your CI runner won't grind to a halt because someone else on the host node is getting DDoSed.
| Feature | Generic Cloud VPS | CoolVDS KVM Instance |
|---|---|---|
| Storage | Network Storage (High Latency) | Local NVMe (Minimal I/O Wait) |
| Virtualization | Often Shared Kernel | Full KVM Isolation |
| Location | Central Europe (20-40ms) | Norway (1-5ms) |
Conclusion
Optimizing your pipeline is an exercise in removing friction. You remove friction in software by caching dependencies and ordering your Dockerfile layers correctly. You remove friction in hardware by demanding NVMe storage and local connectivity.
Don't let a slow disk be the reason your deployment window closes. If you are serious about DevOps, you need infrastructure that respects your time. Deploy a runner on a CoolVDS NVMe instance today and watch your build times drop.