Stop Watching the Progress Bar: Engineering a Faster CI/CD Pipeline
There is nothing more soul-crushing in our line of work than pushing a commit and waiting 25 minutes for the build light to turn green. It breaks your flow. It encourages context switching. And frankly, in 2018, it is inexcusable. If your team is deploying microservices but your infrastructure is running on shared spindles in a chaotic data center across the Atlantic, you are bleeding productivity.
I have spent the last month auditing pipelines for a fintech client here in Oslo. They were running Jenkins on a budget cloud provider. Their complaints? Flaky tests and timeout errors during npm install. The root cause wasn't their code; it was the underlying infrastructure choking on I/O wait times.
Let's dissect the anatomy of a slow pipeline and fix it using technologies that actually work: Docker, aggressive caching, and proper virtualization.
1. The Silent Killer: Disk I/O and I/O Wait
Most developers treat a VPS (Virtual Private Server) as just a bucket of CPU and RAM. This is a fatal error. CI/CD processes are heavily I/O bound. When you run docker build, maven package, or unzip a massive node_modules folder, you are hammering the disk.
On traditional cloud hosts using HDD or SATA SSDs, your "Neighbor"—the guy running a crypto miner or a high-traffic database on the same physical host—can starve your read/write operations. This is why your build takes 4 minutes at 9 AM and 12 minutes at 2 PM.
You need to verify your disk performance. Don't guess. Run fio on your current build server.
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --ramp_time=4
If you aren't seeing IOPS (Input/Output Operations Per Second) in the thousands, your storage is the bottleneck. This is why at CoolVDS we standardized on NVMe storage for all instances. The protocol difference between AHCI (SATA) and NVMe is massive when you are handling the thousands of small files typical of a build workspace.
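A synthetic benchmark is one thing; you should also watch I/O wait while a real build is running. A minimal check, assuming the sysstat package is installed:
# Print CPU and extended per-device stats every 2 seconds; watch %iowait and %util
iostat -x 2
If %iowait sits in double digits during npm install or docker build, the CPU is idling while it waits for the disk.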
2. Docker Layer Caching: You're Doing It Wrong
Docker is the standard now. If you aren't using it, you should be. But merely containerizing your app isn't enough. I see Dockerfiles like this constantly:
FROM node:8
COPY . /app
WORKDIR /app
RUN npm install
CMD ["node", "server.js"]
Every time you change a single line of code in your source files, Docker invalidates the cache for the COPY . /app layer. Consequently, the RUN npm install layer runs again. Every. Single. Time. You are downloading the internet just to fix a typo.
Refactor your Dockerfile to leverage the build cache mechanism effectively. Copy the dependency definitions first.
FROM node:8-alpine
WORKDIR /usr/src/app
# Copy only the dependency manifests first to leverage the cache
COPY package.json package-lock.json ./
# This layer is only rebuilt if dependencies change
RUN npm install --production --silent
# Now copy the rest of your code
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
With this structure, if you change your application logic but not your dependencies, Docker skips the install step entirely. On a CoolVDS NVMe instance, this cache check happens in milliseconds.
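A quick sanity check is to build twice, changing only a source file in between; the tag below is just a placeholder.
# First build populates the layer cache
docker build -t myapp:ci .
# Edit a source file (not package.json), then rebuild
time docker build -t myapp:ci .
The second build should report the npm install layer as cached and finish in seconds.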
Multi-Stage Builds
Since Docker 17.05, we have multi-stage builds, which let us keep the final image tiny by discarding the build toolchain. A smaller image also means less network transfer time when pushing to your registry.
# Build Stage
FROM golang:1.9 as builder
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Production Stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/app/main .
CMD ["./main"]
3. The Network Factor: Latency and GDPR
We are approaching May 2018. The General Data Protection Regulation (GDPR) enforcement date is looming. If your CI/CD pipeline handles production data sanitization or database dumps, you need to know where that data lives. Sending Norwegian customer data to a build server in Virginia (US-East) is becoming a legal minefield, despite Privacy Shield.
Beyond compliance, there is physics. If your developers are in Oslo or Bergen, and your deployment target is a Nordic data center, why route your builds through Frankfurt or London?
Latency Matters:
Ping from Oslo to Amsterdam: ~18ms
Ping from Oslo to CoolVDS (Oslo): ~2ms
When you are rsyncing gigabytes of build artifacts or pushing Docker images, that latency adds up. Keeping traffic local via NIX (Norwegian Internet Exchange) ensures high throughput and stability.
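Just like with disk performance, measure the path instead of guessing. A quick sketch, with a placeholder hostname standing in for your registry or deploy target:
ping -c 10 registry.example.com
# mtr shows where the hops (and the milliseconds) actually accumulate
mtr --report --report-cycles 10 registry.example.com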
Pro Tip: Configure your local /etc/hosts or internal DNS to route internal traffic over private interfaces if your provider supports it. We enable private networking on CoolVDS to allow your GitLab Runner to talk to your staging environment without hitting the public internet, reducing attack surface and latency.
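A minimal sketch of that routing trick; the address and hostname are placeholders for your own private network:
# Point the runner at the staging box's private interface instead of its public IP
echo "10.0.0.12  staging.internal" | sudo tee -a /etc/hosts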
4. Tuning the Kernel for Build Loads
Linux out of the box is tuned for general-purpose usage, not high-throughput build servers. If you are managing your own Jenkins or GitLab Runner nodes, you need to tweak sysctl.conf.
Ephemeral ports can run out during heavy parallel testing. Increase the range:
# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
vm.swappiness = 10
fs.file-max = 2097152
We specifically recommend lowering vm.swappiness. You want your build process in RAM. If it touches swap, your performance falls off a cliff. On our infrastructure, we provide guaranteed RAM allocation, so you don't have to worry about the balloon driver stealing your memory for another tenant.
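After editing /etc/sysctl.conf, apply the changes without a reboot and confirm they stuck:
sudo sysctl -p
# Verify the values the kernel is actually using
sysctl vm.swappiness net.ipv4.ip_local_port_range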
5. Automating the Infrastructure
Don't configure these servers by hand. It's 2018. Use Ansible. Here is a quick playbook snippet to ensure Docker is installed and tuned on your CoolVDS Ubuntu 16.04 instance.
---
- hosts: build_servers
  become: true
  tasks:
    - name: Install Docker prerequisites
      apt:
        name: ['apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common']
        state: present
    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
        state: present
    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present
        update_cache: yes
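Point it at your inventory and run it; the file names here are placeholders:
ansible-playbook -i inventory.ini build-servers.yml
# Add --check first if you want a dry run before touching production runners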
Conclusion
Optimization is not about magic; it's about removing friction. Friction from spinning disks, friction from network hops, and friction from bad caching logic. By moving your CI/CD infrastructure to local, NVMe-backed VPS solutions in Norway, you take the hardware variable out of the equation.
The rest is up to you and your Dockerfile. Don't let slow I/O kill your deployment frequency. Deploy a high-performance runner on CoolVDS today and get your build times down to where they belong.