Stop Watching Paint Dry: Accelerating CI/CD Pipelines in a Post-Safe Harbor World

If I have to wait twenty minutes for a Jenkins build to fail because of a timeout error one more time, I might just pull the server rack out of the wall. We talk about "continuous integration" as if it's a magic wand for productivity, but in reality, most dev teams in Oslo are spending half their day staring at progress bars.

I’ve seen the pattern a dozen times. You start with a clean git push. Hooks fire. Then, the silence. The spinning wheel. The waiting. Most systems administrators throw more vCPUs at the problem, assuming Java or GCC is eating the cycles. They are wrong.

The bottleneck is almost always Disk I/O and latency. And with the recent collapse of the Safe Harbor agreement, where you host that pipeline matters just as much as how you configure it.

The Hidden Killer: I/O Wait

Let's look at a war story from last month. A client running a heavy Magento stack complained their deployment pipeline was taking 45 minutes. They were on a generic "cloud" provider—one of the big American ones.

I logged in and ran top. CPU usage was 15%. Memory was fine. But then I looked at %wa (iowait). It was spiking to 60%.
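If you want to spot-check this on your own box before digging deeper, a quick sketch (assuming a standard Linux install with top and vmstat available) looks like this; the "wa" figure in both outputs is the same iowait metric:

# One-shot batch snapshot of top; check the "wa" value in the CPU summary line
top -b -n 1 | head -n 5
# Or sample once per second for five seconds; watch the "wa" column
vmstat 1 5

If wa sits in the double digits while user and system CPU idle along, the processor is doing nothing but waiting for the disk.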

Every time composer install or npm install ran, the server tried to write thousands of tiny files to a disk that was being shared by five hundred other noisy neighbors. The IOPS (Input/Output Operations Per Second) were capped, choking the build process.

Here is how you diagnose this on your current build server:

# Install sysstat if you haven't already (CentOS/RHEL)
yum install sysstat

# Run iostat to see extended device statistics every 1 second
iostat -x 1

If your await (average time for I/O requests to be served) is consistently over 10ms, your storage is garbage. You can tune your database config all day, but if the physical disk can't keep up, you are dead in the water.
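If you want a harder number than iostat gives you, benchmark the disk directly with fio. This is a rough sketch, assuming fio is installed (yum install fio on CentOS/RHEL); the file path, size, and queue depth are arbitrary examples, so adjust them to your environment and never point it at production data. Compare the reported IOPS and latency against what your build actually generates:

# Rough 4k random-write benchmark against a scratch file
fio --name=ci-disk-test --filename=/tmp/fio-test --rw=randwrite --bs=4k \
    --size=256M --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=30 --time_based --group_reporting

# Clean up the scratch file afterwards
rm /tmp/fio-test

Small-block random writes are exactly what a dependency install or a compile throws at the disk, which is why this test correlates with build times far better than a sequential dd run.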

Pro Tip: Moving this client to a CoolVDS instance with pure NVMe storage dropped the build time from 45 minutes to 12 minutes. No code changes. Just disk speed. NVMe isn't just a buzzword; for operations involving thousands of small reads/writes (like compiling code), it is mandatory.

Docker Caching: Do It Right

Docker is rapidly becoming the standard for isolation, especially with the release of version 1.9 earlier this month. However, I see too many Dockerfiles written without understanding the layer cache.

If your Dockerfile looks like this, you are wasting time:

FROM node:0.12
ADD . /app
RUN npm install

Every time you change a single line of code in your source, Docker invalidates the cache for the ADD command, forcing npm install to run again from scratch. That is bandwidth and I/O suicide.

Structure it like this instead:

FROM node:0.12
WORKDIR /app

# Copy only package.json first
COPY package.json /app/

# Install dependencies. This layer is cached unless package.json changes
RUN npm install

# Now copy the rest of your code
ADD . /app

This ensures that unless you actually add a dependency, the heavy lifting is cached. It sounds simple, but I audit pipelines every week that miss this fundamental concept.
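You can verify the cache behaviour yourself; the image tag below is just an example. Build once, change a source file (but not package.json), then build again and watch the output: the RUN npm install step should be reported as "Using cache", and only the final ADD layer gets rebuilt.

# First build populates the layer cache
docker build -t myapp:ci .

# Edit a source file, then rebuild; npm install should come from cache
docker build -t myapp:ci .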

The