Stop Watching Progress Bars: Optimizing CI/CD Pipelines for High-Velocity Dev Teams
There is nothing that destroys developer morale faster than a 25-minute build pipeline for a one-line CSS fix. I have seen senior engineers lose their minds waiting for a Jenkins queue to clear, only to have the deployment fail because of a timeout. It is a productivity killer.
In 2019, "It works on my machine" is no longer a valid excuse. The problem is rarely the code; it is the infrastructure executing that code. If your Continuous Integration (CI) runners are gasping for air on oversubscribed hardware, your agile workflow is effectively a waterfall. We are going to fix that today.
This isn't a theory piece. We are going to look at specific configurations for GitLab CI (the rising standard for self-hosted DevOps), Docker layer caching, and the underlying storage subsystem limitations that most hosting providers hide from you.
The Hidden Bottleneck: It's Not CPU, It's I/O
Most DevOps engineers throw vCPUs at a slow pipeline. They upgrade from a 2-core to a 4-core VPS and wonder why the `npm install` or `mvn package` step only improved by 10%. Here is the hard truth: CI/CD is a disk-intensive operation.
Every time you trigger a pipeline, you are:
- Pulling Docker images (Write).
- Extracting layers (Read/Write).
- Compiling binaries (High IOPS).
- Creating cache artifacts (Write).
On a standard SATA SSD VPS—or worse, a provider using Ceph storage over a congested network—your CPU spends half its time in iowait. It is waiting for the disk to catch up. I recently diagnosed a Magento deployment pipeline that took 18 minutes on a competitor's "Cloud VPS." We moved the exact same setup to a CoolVDS instance with local NVMe storage, and the build time dropped to 4 minutes. No config changes, just raw I/O throughput.
Diagnosing Your I/O
Don't take my word for it. Run `sysbench` on your current build server. If you don't have it, install it (`apt install sysbench` on Ubuntu 18.04), then prepare the test files, run the benchmark, and clean up afterwards:

```bash
# Lay down 5 GB of test files
sysbench fileio --file-total-size=5G --file-test-mode=rndrw prepare

# Hammer them with random reads/writes for 5 minutes
sysbench fileio --file-total-size=5G --file-test-mode=rndrw --time=300 --max-requests=0 run

# Remove the test files when you are done
sysbench fileio cleanup
```
If your Random Read/Write operations are under 1000 IOPS, your storage is the bottleneck. On our CoolVDS NVMe nodes, we consistently see numbers an order of magnitude higher. Speed enables iteration.
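sysbench gives you a synthetic ceiling; to confirm that your real builds are disk-bound, watch iowait while a pipeline is actually running. Both tools below are standard (`iostat` ships with the sysstat package):

```bash
# Extended device statistics every 2 seconds while a build is running.
# High %iowait, with %util pinned near 100 on the build disk, means the
# CPU is stalled waiting on storage rather than compiling.
iostat -x 2

# Alternative: the "wa" column in vmstat tells the same story.
vmstat 2
```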
Optimizing Docker Caching in GitLab CI
Hardware solves raw speed; configuration solves efficiency. A common mistake is rebuilding the entire Docker image for every commit. You need to leverage layer caching effectively.

In your `.gitlab-ci.yml`, do not just run a blind build. Use the `--cache-from` flag. This tells Docker to look at the previous image and reuse any layers that haven't changed (like your OS dependencies). It only pays off if your Dockerfile is structured for caching, as shown in the sketch after the job definition below.
```yaml
build_image:
  stage: build
  image: docker:18.09
  services:
    - docker:18.09-dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
```
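The `--cache-from` trick only helps if the Dockerfile itself is ordered so that the expensive, rarely changing steps sit above the frequently changing ones. Here is a minimal sketch for a hypothetical Node.js service (file names and base image are illustrative, not from any particular project):

```dockerfile
FROM node:10-alpine

WORKDIR /app

# Copy only the dependency manifests first. As long as these files are
# unchanged, the npm install layer can be reused from the pulled :latest image.
COPY package.json package-lock.json ./
RUN npm ci --production

# Application source changes on nearly every commit, so it goes last.
# Only the layers from this point down are rebuilt on a typical push.
COPY . .

CMD ["node", "server.js"]
```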
Pro Tip: Notice the `DOCKER_DRIVER: overlay2` variable. Ensure your host OS kernel supports this. The old `devicemapper` driver is a performance graveyard. All CoolVDS templates (CentOS 7, Ubuntu 18.04) come pre-optimized for `overlay2` support.
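A quick way to verify both sides of that: check which storage driver the Docker daemon is actually using, and whether the overlay filesystem is available in the running kernel.

```bash
# Which storage driver is the daemon using right now?
docker info --format '{{.Driver}}'

# Is overlayfs compiled in or loadable on this kernel?
grep overlay /proc/filesystems
```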
The "Concurrent" Trap
If you are running your own GitLab Runner, check your `/etc/gitlab-runner/config.toml`. The default `concurrent` setting is often left at 1, which means that if two developers push code, one of them waits.

However, cranking this number up costs memory. Each Docker executor spins up its own container, so with 4 GB of RAM and concurrency set to 4, you risk OOM (Out of Memory) kills during the compilation step. One mitigation is to cap memory per job, shown after the config below.
```toml
concurrent = 4
check_interval = 0

[[runners]]
  name = "CoolVDS-Oslo-Runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```
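If you do raise `concurrent`, consider capping memory per build container so one greedy compile cannot starve the whole runner. Recent GitLab Runner versions expose Docker's memory limits in the `[runners.docker]` section; treat the values below as placeholders to tune for your own workload:

```toml
[runners.docker]
  # Hard per-job limit: the kernel OOM-kills the offending job, not the host.
  memory = "1g"
  # Total memory + swap the job container may consume.
  memory_swap = "1500m"
  # Soft reservation the scheduler tries to honour under memory pressure.
  memory_reservation = "750m"
```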
Data Sovereignty and Latency: The Norwegian Context
For teams based in Oslo or Bergen, hosting your CI/CD infrastructure in Frankfurt or Amsterdam adds unnecessary latency. Every `git fetch` and `docker push` travels across the continent. By keeping your runners in Norway, you reduce round-trip time (RTT) significantly.
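You can put a number on that before migrating anything. The hostnames below are placeholders; point them at your own GitLab instance and container registry:

```bash
# Raw round-trip time from the runner to your GitLab host.
ping -c 10 gitlab.example.com

# Latency per registry request (a docker pull/push makes many such round trips).
curl -o /dev/null -s -w 'connect: %{time_connect}s  TLS: %{time_appconnect}s  total: %{time_total}s\n' \
  https://registry.example.com/v2/
```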
Furthermore, we cannot ignore the legal landscape in 2019. With GDPR in full swing and the Datatilsynet keeping a close watch on data handling, knowing exactly where your code and temporary artifacts reside is crucial. CoolVDS offers strict data residency within Norway. Your intellectual property doesn't leave the jurisdiction unless you tell it to.
Why KVM Trumps Containers for Build Servers
There is a trend to run CI runners inside containers on shared hosts. This is dangerous for performance. The "noisy neighbor" effect is real. If another tenant on that physical host decides to mine cryptocurrency, your build times fluctuate unpredictably.
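You can spot this from inside the guest: hypervisor "steal" time shows up in standard tools. A fluctuating, non-zero steal percentage on a build server is a red flag.

```bash
# The "st" column is CPU time the hypervisor handed to other tenants.
vmstat 2 10

# top reports the same figure as %st in its CPU summary line.
top -b -n 1 | head -5
```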
At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) virtualization. This provides true hardware isolation. Your NVMe slices and CPU cycles are reserved. For a CI pipeline that demands consistency, you cannot rely on burstable resources.
Final Thoughts
Optimization is an accumulation of marginal gains. Switching to `overlay2` saves 30 seconds. Implementing `--cache-from` saves 5 minutes. But moving to high-performance NVMe storage can cut your total pipeline time by 50% or more, immediately.
Your developers cost too much to have them waiting on a spinning hard drive. Treat your CI infrastructure with the same respect you treat your production database.
Ready to cut your build times? Deploy a high-frequency KVM instance on CoolVDS today and experience the difference of local NVMe storage.